Jan 29 11:58:08.431723 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:58:08.431745 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 11:58:08.431753 kernel: KASLR enabled
Jan 29 11:58:08.431759 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 29 11:58:08.431766 kernel: printk: bootconsole [pl11] enabled
Jan 29 11:58:08.431772 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:58:08.431779 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 29 11:58:08.431785 kernel: random: crng init done
Jan 29 11:58:08.431791 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:58:08.431797 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 29 11:58:08.431803 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431809 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431816 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 29 11:58:08.431822 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431830 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431836 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431843 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431851 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431857 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431864 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 29 11:58:08.431870 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 11:58:08.431876 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 29 11:58:08.431883 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 29 11:58:08.431889 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 29 11:58:08.431895 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 29 11:58:08.431902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 29 11:58:08.431908 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 29 11:58:08.431914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 29 11:58:08.431922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 29 11:58:08.431928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 29 11:58:08.431935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 29 11:58:08.431941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 29 11:58:08.431947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 29 11:58:08.431954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 29 11:58:08.431960 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 29 11:58:08.431966 kernel: Zone ranges:
Jan 29 11:58:08.431972 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 29 11:58:08.431978 kernel: DMA32 empty
Jan 29 11:58:08.431985 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 11:58:08.431991 kernel: Movable zone start for each node
Jan 29 11:58:08.432002 kernel: Early memory node ranges
Jan 29 11:58:08.432008 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 29 11:58:08.432015 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 29 11:58:08.432022 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 29 11:58:08.432029 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 29 11:58:08.432037 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 29 11:58:08.432044 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 29 11:58:08.432051 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 11:58:08.432058 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 29 11:58:08.432064 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 29 11:58:08.432071 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:58:08.432078 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:58:08.432084 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:58:08.432091 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 29 11:58:08.432098 kernel: psci: SMC Calling Convention v1.4
Jan 29 11:58:08.432104 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 29 11:58:08.432111 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 29 11:58:08.432119 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:58:08.432126 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:58:08.432133 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 11:58:08.432140 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:58:08.432146 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:58:08.432153 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:58:08.432160 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:58:08.432167 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:58:08.432173 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:58:08.432180 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:58:08.432187 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 29 11:58:08.432195 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:58:08.432202 kernel: alternatives: applying boot alternatives
Jan 29 11:58:08.432210 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:58:08.432217 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:58:08.432224 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:58:08.432231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:58:08.432238 kernel: Fallback order for Node 0: 0
Jan 29 11:58:08.432244 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 29 11:58:08.432251 kernel: Policy zone: Normal
Jan 29 11:58:08.432258 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:58:08.432264 kernel: software IO TLB: area num 2.
Jan 29 11:58:08.432273 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 29 11:58:08.432280 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved)
Jan 29 11:58:08.432287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:58:08.432294 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:58:08.432301 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:58:08.432308 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:58:08.432315 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:58:08.432321 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:58:08.432328 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:58:08.432335 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:58:08.432342 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:58:08.432350 kernel: GICv3: 960 SPIs implemented
Jan 29 11:58:08.432357 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:58:08.434409 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:58:08.434419 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:58:08.434426 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 29 11:58:08.434433 kernel: ITS: No ITS available, not enabling LPIs
Jan 29 11:58:08.434441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:58:08.434448 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:58:08.434455 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:58:08.434462 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:58:08.434469 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:58:08.434481 kernel: Console: colour dummy device 80x25
Jan 29 11:58:08.434488 kernel: printk: console [tty1] enabled
Jan 29 11:58:08.434495 kernel: ACPI: Core revision 20230628
Jan 29 11:58:08.434502 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:58:08.434510 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:58:08.434517 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:58:08.434524 kernel: landlock: Up and running.
Jan 29 11:58:08.434531 kernel: SELinux: Initializing.
Jan 29 11:58:08.434538 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:58:08.434545 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:58:08.434554 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:58:08.434561 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:58:08.434568 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 29 11:58:08.434575 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 29 11:58:08.434582 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 11:58:08.434589 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:58:08.434597 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:58:08.434611 kernel: Remapping and enabling EFI services.
Jan 29 11:58:08.434618 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:58:08.434625 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:58:08.434633 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 29 11:58:08.434642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:58:08.434649 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:58:08.434656 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:58:08.434664 kernel: SMP: Total of 2 processors activated.
Jan 29 11:58:08.434671 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:58:08.434680 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 29 11:58:08.434688 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:58:08.434695 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:58:08.434703 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:58:08.434710 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:58:08.434717 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:58:08.434725 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:58:08.434732 kernel: alternatives: applying system-wide alternatives
Jan 29 11:58:08.434739 kernel: devtmpfs: initialized
Jan 29 11:58:08.434748 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:58:08.434756 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:58:08.434763 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:58:08.434771 kernel: SMBIOS 3.1.0 present.
Jan 29 11:58:08.434779 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 29 11:58:08.434786 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:58:08.434794 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:58:08.434801 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:58:08.434809 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:58:08.434818 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:58:08.434825 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 29 11:58:08.434833 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:58:08.434840 kernel: cpuidle: using governor menu
Jan 29 11:58:08.434847 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:58:08.434855 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:58:08.434862 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:58:08.434869 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:58:08.434877 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:58:08.434886 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:58:08.434893 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 11:58:08.434900 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:58:08.434908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:58:08.434915 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:58:08.434923 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:58:08.434930 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:58:08.434937 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:58:08.434945 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:58:08.434954 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:58:08.434962 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:58:08.434969 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:58:08.434976 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:58:08.434984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:58:08.434991 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:58:08.434998 kernel: ACPI: Interpreter enabled
Jan 29 11:58:08.435006 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:58:08.435013 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:58:08.435022 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:58:08.435030 kernel: printk: bootconsole [pl11] disabled
Jan 29 11:58:08.435037 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 29 11:58:08.435045 kernel: iommu: Default domain type: Translated
Jan 29 11:58:08.435052 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:58:08.435060 kernel: efivars: Registered efivars operations
Jan 29 11:58:08.435067 kernel: vgaarb: loaded
Jan 29 11:58:08.435074 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:58:08.435081 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:58:08.435091 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:58:08.435098 kernel: pnp: PnP ACPI init
Jan 29 11:58:08.435105 kernel: pnp: PnP ACPI: found 0 devices
Jan 29 11:58:08.435113 kernel: NET: Registered PF_INET protocol family
Jan 29 11:58:08.435120 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:58:08.435128 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:58:08.435135 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:58:08.435143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:58:08.435150 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:58:08.435159 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:58:08.435167 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:58:08.435174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:58:08.435182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:58:08.435189 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:58:08.435196 kernel: kvm [1]: HYP mode not available
Jan 29 11:58:08.435203 kernel: Initialise system trusted keyrings
Jan 29 11:58:08.435211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:58:08.435218 kernel: Key type asymmetric registered
Jan 29 11:58:08.435227 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:58:08.435235 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:58:08.435242 kernel: io scheduler mq-deadline registered
Jan 29 11:58:08.435249 kernel: io scheduler kyber registered
Jan 29 11:58:08.435257 kernel: io scheduler bfq registered
Jan 29 11:58:08.435264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:58:08.435271 kernel: thunder_xcv, ver 1.0
Jan 29 11:58:08.435278 kernel: thunder_bgx, ver 1.0
Jan 29 11:58:08.435286 kernel: nicpf, ver 1.0
Jan 29 11:58:08.435293 kernel: nicvf, ver 1.0
Jan 29 11:58:08.435459 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:58:08.435535 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:58:07 UTC (1738151887)
Jan 29 11:58:08.435545 kernel: efifb: probing for efifb
Jan 29 11:58:08.435553 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 11:58:08.435561 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 11:58:08.435568 kernel: efifb: scrolling: redraw
Jan 29 11:58:08.435576 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 11:58:08.435586 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 11:58:08.435593 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:58:08.435601 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 29 11:58:08.435608 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:58:08.435616 kernel: No ACPI PMU IRQ for CPU0
Jan 29 11:58:08.435623 kernel: No ACPI PMU IRQ for CPU1
Jan 29 11:58:08.435630 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 29 11:58:08.435638 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:58:08.435645 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:58:08.435654 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:58:08.435662 kernel: Segment Routing with IPv6
Jan 29 11:58:08.435669 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:58:08.435676 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:58:08.435683 kernel: Key type dns_resolver registered
Jan 29 11:58:08.435691 kernel: registered taskstats version 1
Jan 29 11:58:08.435698 kernel: Loading compiled-in X.509 certificates
Jan 29 11:58:08.435705 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 11:58:08.435713 kernel: Key type .fscrypt registered
Jan 29 11:58:08.435722 kernel: Key type fscrypt-provisioning registered
Jan 29 11:58:08.435729 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:58:08.435737 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:58:08.435744 kernel: ima: No architecture policies found
Jan 29 11:58:08.435751 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:58:08.435759 kernel: clk: Disabling unused clocks
Jan 29 11:58:08.435766 kernel: Freeing unused kernel memory: 39360K
Jan 29 11:58:08.435773 kernel: Run /init as init process
Jan 29 11:58:08.435781 kernel: with arguments:
Jan 29 11:58:08.435789 kernel: /init
Jan 29 11:58:08.435797 kernel: with environment:
Jan 29 11:58:08.435804 kernel: HOME=/
Jan 29 11:58:08.435811 kernel: TERM=linux
Jan 29 11:58:08.435818 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:58:08.435828 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:58:08.435837 systemd[1]: Detected virtualization microsoft.
Jan 29 11:58:08.435845 systemd[1]: Detected architecture arm64.
Jan 29 11:58:08.435855 systemd[1]: Running in initrd.
Jan 29 11:58:08.435862 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:58:08.435870 systemd[1]: Hostname set to .
Jan 29 11:58:08.435878 systemd[1]: Initializing machine ID from random generator.
Jan 29 11:58:08.435886 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:58:08.435894 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:58:08.435902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:58:08.435910 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:58:08.435920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:58:08.435928 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:58:08.435936 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:58:08.435945 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:58:08.435953 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:58:08.435961 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:58:08.435969 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:58:08.435978 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:58:08.435986 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:58:08.435994 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:58:08.436002 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:58:08.436010 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:58:08.436018 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:58:08.436026 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:58:08.436033 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:58:08.436043 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:58:08.436051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:58:08.436059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:58:08.436067 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:58:08.436075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:58:08.436083 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:58:08.436090 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:58:08.436098 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:58:08.436106 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:58:08.436134 systemd-journald[217]: Collecting audit messages is disabled.
Jan 29 11:58:08.436154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:58:08.436162 systemd-journald[217]: Journal started
Jan 29 11:58:08.436183 systemd-journald[217]: Runtime Journal (/run/log/journal/3cff260b373e40f1ab349109c2eda8b8) is 8.0M, max 78.5M, 70.5M free.
Jan 29 11:58:08.444616 systemd-modules-load[218]: Inserted module 'overlay'
Jan 29 11:58:08.455729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:58:08.474380 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:58:08.474426 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:58:08.487249 kernel: Bridge firewalling registered
Jan 29 11:58:08.492190 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 29 11:58:08.493386 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:58:08.515249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:58:08.524121 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:58:08.537006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:58:08.549129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:08.579618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:58:08.597762 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:58:08.616216 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:58:08.635546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:58:08.658514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:58:08.686033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:08.693866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:58:08.714220 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:58:08.737936 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:58:08.751535 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:58:08.760532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:58:08.787397 dracut-cmdline[250]: dracut-dracut-053
Jan 29 11:58:08.794636 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:58:08.844732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:58:08.875917 systemd-resolved[254]: Positive Trust Anchors:
Jan 29 11:58:08.880494 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:58:08.880528 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:58:08.956074 kernel: SCSI subsystem initialized
Jan 29 11:58:08.882707 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jan 29 11:58:08.883506 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:58:08.972933 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:58:08.893697 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:58:08.985453 kernel: iscsi: registered transport (tcp)
Jan 29 11:58:09.004405 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:58:09.004425 kernel: QLogic iSCSI HBA Driver
Jan 29 11:58:09.038142 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:58:09.055595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:58:09.090782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:58:09.090826 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:58:09.098593 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:58:09.147387 kernel: raid6: neonx8 gen() 15757 MB/s
Jan 29 11:58:09.167371 kernel: raid6: neonx4 gen() 15663 MB/s
Jan 29 11:58:09.188370 kernel: raid6: neonx2 gen() 13193 MB/s
Jan 29 11:58:09.210377 kernel: raid6: neonx1 gen() 10482 MB/s
Jan 29 11:58:09.230374 kernel: raid6: int64x8 gen() 6960 MB/s
Jan 29 11:58:09.250370 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 29 11:58:09.272371 kernel: raid6: int64x2 gen() 6131 MB/s
Jan 29 11:58:09.297166 kernel: raid6: int64x1 gen() 5058 MB/s
Jan 29 11:58:09.297177 kernel: raid6: using algorithm neonx8 gen() 15757 MB/s
Jan 29 11:58:09.323918 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Jan 29 11:58:09.323941 kernel: raid6: using neon recovery algorithm
Jan 29 11:58:09.332372 kernel: xor: measuring software checksum speed
Jan 29 11:58:09.336371 kernel: 8regs : 18722 MB/sec
Jan 29 11:58:09.344749 kernel: 32regs : 18860 MB/sec
Jan 29 11:58:09.344760 kernel: arm64_neon : 27105 MB/sec
Jan 29 11:58:09.349746 kernel: xor: using function: arm64_neon (27105 MB/sec)
Jan 29 11:58:09.402384 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:58:09.411714 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:58:09.430499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:58:09.455455 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 29 11:58:09.461612 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:58:09.490542 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:58:09.505983 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Jan 29 11:58:09.533683 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:58:09.553614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:58:09.594666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:58:09.620566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:58:09.645138 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:58:09.660964 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:58:09.678218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:58:09.694160 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:58:09.712400 kernel: hv_vmbus: Vmbus version:5.3
Jan 29 11:58:09.713563 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:58:09.730997 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:58:09.753518 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:58:09.804791 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 11:58:09.804813 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 11:58:09.804823 kernel: hv_vmbus: registering driver hv_storvsc
Jan 29 11:58:09.804833 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 29 11:58:09.804849 kernel: scsi host0: storvsc_host_t
Jan 29 11:58:09.805013 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 11:58:09.805023 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 11:58:09.805033 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 11:58:09.828178 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 29 11:58:09.828213 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 11:58:09.828223 kernel: scsi host1: storvsc_host_t
Jan 29 11:58:09.753677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:09.863053 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 29 11:58:09.863083 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 29 11:58:09.863203 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:58:09.870809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:58:09.871804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:09.880235 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:58:09.912645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:58:09.959903 kernel: PTP clock support registered
Jan 29 11:58:09.959928 kernel: hv_utils: Registering HyperV Utility Driver
Jan 29 11:58:09.947428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:10.000295 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 29 11:58:10.337799 kernel: hv_vmbus: registering driver hv_utils
Jan 29 11:58:10.337824 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 29 11:58:10.337949 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 11:58:10.338034 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 29 11:58:10.338118 kernel: hv_utils: Heartbeat IC version 3.0
Jan 29 11:58:10.338136 kernel: hv_netvsc 000d3ac3-937e-000d-3ac3-937e000d3ac3 eth0: VF slot 1 added
Jan 29 11:58:10.338227 kernel: hv_utils: Shutdown IC version 3.2
Jan 29 11:58:10.338237 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 29 11:58:10.338321 kernel: hv_utils: TimeSync IC version 4.0
Jan 29 11:58:10.338330 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:58:10.338340 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 11:58:10.023549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:58:10.324850 systemd-resolved[254]: Clock change detected. Flushing caches.
Jan 29 11:58:10.396056 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 29 11:58:10.418419 kernel: hv_vmbus: registering driver hv_pci
Jan 29 11:58:10.418436 kernel: hv_pci f1242398-b894-4440-9f8b-a5e7b7cb9a42: PCI VMBus probing: Using version 0x10004
Jan 29 11:58:10.483210 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:58:10.483229 kernel: hv_pci f1242398-b894-4440-9f8b-a5e7b7cb9a42: PCI host bridge to bus b894:00
Jan 29 11:58:10.483357 kernel: pci_bus b894:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 29 11:58:10.483466 kernel: pci_bus b894:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 29 11:58:10.483548 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 29 11:58:10.484057 kernel: pci b894:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 29 11:58:10.484186 kernel: pci b894:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 11:58:10.484282 kernel: pci b894:00:02.0: enabling Extended Tags
Jan 29 11:58:10.484375 kernel: pci b894:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b894:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 29 11:58:10.484466 kernel: pci_bus b894:00: busn_res: [bus 00-ff] end is updated to 00
Jan 29 11:58:10.484546 kernel: pci b894:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 11:58:10.387392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:10.531130 kernel: mlx5_core b894:00:02.0: enabling device (0000 -> 0002)
Jan 29 11:58:10.753919 kernel: mlx5_core b894:00:02.0: firmware version: 16.30.1284
Jan 29 11:58:10.754065 kernel: hv_netvsc 000d3ac3-937e-000d-3ac3-937e000d3ac3 eth0: VF registering: eth1
Jan 29 11:58:10.754160 kernel: mlx5_core b894:00:02.0 eth1: joined to eth0
Jan 29 11:58:10.754252 kernel: mlx5_core b894:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 29 11:58:10.763656 kernel: mlx5_core b894:00:02.0 enP47252s1: renamed from eth1
Jan 29 11:58:10.955828 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (481)
Jan 29 11:58:10.968414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 11:58:10.983912 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 29 11:58:11.033635 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (487)
Jan 29 11:58:11.047788 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 29 11:58:11.056338 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 29 11:58:11.087896 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:58:11.111540 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 29 11:58:11.136639 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:58:11.146647 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:58:12.155638 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:58:12.156980 disk-uuid[598]: The operation has completed successfully.
Jan 29 11:58:12.212632 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:58:12.212738 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:58:12.259764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:58:12.273312 sh[684]: Success
Jan 29 11:58:12.305665 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:58:12.516979 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:58:12.528756 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:58:12.538246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:58:12.567726 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 11:58:12.567770 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:12.574779 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:58:12.581180 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:58:12.586053 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:58:12.878212 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:58:12.884530 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:58:12.903886 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:58:12.912769 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:58:12.953192 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:12.953244 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:12.958643 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:58:12.986672 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:58:12.994369 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:58:13.006398 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:13.012062 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:58:13.026885 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:58:13.055152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:58:13.077743 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:58:13.107725 systemd-networkd[868]: lo: Link UP
Jan 29 11:58:13.107736 systemd-networkd[868]: lo: Gained carrier
Jan 29 11:58:13.109368 systemd-networkd[868]: Enumeration completed
Jan 29 11:58:13.109985 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:58:13.109988 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:58:13.112217 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:58:13.124065 systemd[1]: Reached target network.target - Network.
Jan 29 11:58:13.186635 kernel: mlx5_core b894:00:02.0 enP47252s1: Link up
Jan 29 11:58:13.256641 kernel: hv_netvsc 000d3ac3-937e-000d-3ac3-937e000d3ac3 eth0: Data path switched to VF: enP47252s1
Jan 29 11:58:13.257285 systemd-networkd[868]: enP47252s1: Link UP
Jan 29 11:58:13.257378 systemd-networkd[868]: eth0: Link UP
Jan 29 11:58:13.257502 systemd-networkd[868]: eth0: Gained carrier
Jan 29 11:58:13.257509 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:58:13.273010 systemd-networkd[868]: enP47252s1: Gained carrier
Jan 29 11:58:13.303679 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 11:58:14.041304 ignition[848]: Ignition 2.19.0
Jan 29 11:58:14.041319 ignition[848]: Stage: fetch-offline
Jan 29 11:58:14.041354 ignition[848]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:14.046167 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:58:14.041362 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:14.068882 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:58:14.041465 ignition[848]: parsed url from cmdline: ""
Jan 29 11:58:14.041468 ignition[848]: no config URL provided
Jan 29 11:58:14.041473 ignition[848]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:58:14.041481 ignition[848]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:58:14.041485 ignition[848]: failed to fetch config: resource requires networking
Jan 29 11:58:14.045231 ignition[848]: Ignition finished successfully
Jan 29 11:58:14.103896 ignition[879]: Ignition 2.19.0
Jan 29 11:58:14.103901 ignition[879]: Stage: fetch
Jan 29 11:58:14.104105 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:14.104121 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:14.104239 ignition[879]: parsed url from cmdline: ""
Jan 29 11:58:14.104242 ignition[879]: no config URL provided
Jan 29 11:58:14.104247 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:58:14.104254 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:58:14.104277 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 29 11:58:14.215356 ignition[879]: GET result: OK
Jan 29 11:58:14.215484 ignition[879]: config has been read from IMDS userdata
Jan 29 11:58:14.215528 ignition[879]: parsing config with SHA512: 3861e0299827f43b2789403ce165fecc92d593b7221a0195f88d8efa67b3a63ef001212f167e2c7b5d41953e8440492718d14308370101357c90bab721d52e34
Jan 29 11:58:14.219948 unknown[879]: fetched base config from "system"
Jan 29 11:58:14.220405 ignition[879]: fetch: fetch complete
Jan 29 11:58:14.219956 unknown[879]: fetched base config from "system"
Jan 29 11:58:14.220412 ignition[879]: fetch: fetch passed
Jan 29 11:58:14.219960 unknown[879]: fetched user config from "azure"
Jan 29 11:58:14.220452 ignition[879]: Ignition finished successfully
Jan 29 11:58:14.226313 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:58:14.252887 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:58:14.278800 ignition[885]: Ignition 2.19.0
Jan 29 11:58:14.278806 ignition[885]: Stage: kargs
Jan 29 11:58:14.291107 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:58:14.279006 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:14.279015 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:14.311549 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:58:14.280119 ignition[885]: kargs: kargs passed
Jan 29 11:58:14.280161 ignition[885]: Ignition finished successfully
Jan 29 11:58:14.335629 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:58:14.333573 ignition[891]: Ignition 2.19.0
Jan 29 11:58:14.344657 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:58:14.333579 ignition[891]: Stage: disks
Jan 29 11:58:14.355664 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:58:14.333771 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:14.369249 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:58:14.333780 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:14.378805 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:58:14.334700 ignition[891]: disks: disks passed
Jan 29 11:58:14.392241 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:58:14.334743 ignition[891]: Ignition finished successfully
Jan 29 11:58:14.420829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:58:14.492446 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 29 11:58:14.497060 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:58:14.520845 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:58:14.580633 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 11:58:14.580729 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:58:14.586817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:58:14.638708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:58:14.650764 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:58:14.660796 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 11:58:14.688841 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (911)
Jan 29 11:58:14.682276 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:58:14.682334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:58:14.738819 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:14.738840 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:14.738850 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:58:14.727888 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:58:14.758642 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:58:14.759847 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:58:14.768103 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:58:14.811798 systemd-networkd[868]: eth0: Gained IPv6LL
Jan 29 11:58:15.109875 coreos-metadata[913]: Jan 29 11:58:15.109 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 11:58:15.121019 coreos-metadata[913]: Jan 29 11:58:15.117 INFO Fetch successful
Jan 29 11:58:15.121019 coreos-metadata[913]: Jan 29 11:58:15.117 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 11:58:15.142940 coreos-metadata[913]: Jan 29 11:58:15.130 INFO Fetch successful
Jan 29 11:58:15.149804 coreos-metadata[913]: Jan 29 11:58:15.149 INFO wrote hostname ci-4081.3.0-a-ecab7ceadc to /sysroot/etc/hostname
Jan 29 11:58:15.161473 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:58:15.195762 systemd-networkd[868]: enP47252s1: Gained IPv6LL
Jan 29 11:58:15.474811 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:58:15.527697 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:58:15.537301 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:58:15.560316 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:58:16.564801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:58:16.584891 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:58:16.596554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:58:16.626634 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:16.635458 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:58:16.660471 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:58:16.670920 ignition[1028]: INFO : Ignition 2.19.0
Jan 29 11:58:16.680308 ignition[1028]: INFO : Stage: mount
Jan 29 11:58:16.680308 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:16.680308 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:16.680308 ignition[1028]: INFO : mount: mount passed
Jan 29 11:58:16.680308 ignition[1028]: INFO : Ignition finished successfully
Jan 29 11:58:16.680979 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:58:16.715809 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:58:16.740835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:58:16.783212 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1040)
Jan 29 11:58:16.783261 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:58:16.791331 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:58:16.791628 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:58:16.805649 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:58:16.806785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:58:16.836955 ignition[1058]: INFO : Ignition 2.19.0
Jan 29 11:58:16.842512 ignition[1058]: INFO : Stage: files
Jan 29 11:58:16.842512 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:16.842512 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:16.842512 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:58:16.887541 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:58:16.887541 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:58:16.952900 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:58:16.962213 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:58:16.962213 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:58:16.953332 unknown[1058]: wrote ssh authorized keys file for user: core
Jan 29 11:58:16.988965 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:58:17.002731 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:58:17.058086 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:58:17.167186 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:58:17.167186 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 11:58:17.662135 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:58:17.867798 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:58:17.867798 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:58:17.891010 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:58:17.905065 ignition[1058]: INFO : files: files passed
Jan 29 11:58:17.905065 ignition[1058]: INFO : Ignition finished successfully
Jan 29 11:58:17.893510 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:58:17.943398 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:58:17.960775 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:58:17.990380 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:58:18.059632 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:58:18.059632 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:58:17.990485 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:58:18.091706 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:58:18.025938 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:58:18.035421 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:58:18.083886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:58:18.126918 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:58:18.128653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:58:18.146434 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:58:18.159686 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:58:18.173367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:58:18.185842 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:58:18.217112 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:58:18.238917 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:58:18.260066 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:58:18.260159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:58:18.274807 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:58:18.289462 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:58:18.303671 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:58:18.316902 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:58:18.316975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:58:18.337487 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:58:18.351683 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:58:18.363448 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:58:18.376737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:58:18.391032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:58:18.404421 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:58:18.417868 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:58:18.433732 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:58:18.449889 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:58:18.464002 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:58:18.475874 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:58:18.475950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:58:18.493668 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:58:18.500606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:58:18.513970 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:58:18.520538 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:58:18.528441 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:58:18.528507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:58:18.549929 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:58:18.549983 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:58:18.566411 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:58:18.566460 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:58:18.578050 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:58:18.578093 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:58:18.650186 ignition[1110]: INFO : Ignition 2.19.0
Jan 29 11:58:18.650186 ignition[1110]: INFO : Stage: umount
Jan 29 11:58:18.650186 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:58:18.650186 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 11:58:18.650186 ignition[1110]: INFO : umount: umount passed
Jan 29 11:58:18.650186 ignition[1110]: INFO : Ignition finished successfully
Jan 29 11:58:18.612830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:58:18.631698 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:58:18.631784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:58:18.644740 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:58:18.668912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:58:18.668991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:58:18.683730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:58:18.683770 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:58:18.703573 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:58:18.703676 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:58:18.715706 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:58:18.715799 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:58:18.727730 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:58:18.727777 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:58:18.740344 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:58:18.740397 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:58:18.753218 systemd[1]: Stopped target network.target - Network.
Jan 29 11:58:18.758857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:58:18.758921 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:58:18.773336 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:58:18.785700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:58:18.792140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:58:18.799202 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:58:18.805058 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:58:18.816993 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:58:18.817044 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:58:18.829333 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:58:18.829375 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:58:18.841729 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:58:18.841789 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:58:18.855014 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:58:18.855061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:58:18.868436 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:58:18.881326 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:58:18.896871 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:58:18.901604 systemd-networkd[868]: eth0: DHCPv6 lease lost
Jan 29 11:58:18.903316 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:58:18.903448 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:58:19.172873 kernel: hv_netvsc 000d3ac3-937e-000d-3ac3-937e000d3ac3 eth0: Data path switched from VF: enP47252s1
Jan 29 11:58:18.918888 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:58:18.918999 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:58:18.939384 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:58:18.939458 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:58:18.971835 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:58:18.984501 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:58:18.984570 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:58:19.007668 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:58:19.007722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:58:19.020906 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:58:19.020948 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:58:19.033206 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:58:19.033255 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:58:19.054462 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:58:19.075218 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:58:19.075311 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:58:19.099310 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:58:19.099437 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:58:19.119488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:58:19.119562 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:58:19.131469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:58:19.131503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:58:19.143216 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:58:19.143269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:58:19.167436 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:58:19.167498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:58:19.183422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:58:19.416257 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:58:19.183475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:58:19.195865 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:58:19.195931 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:58:19.213820 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:58:19.227675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:58:19.227742 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:58:19.241876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:58:19.241932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:58:19.255970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:58:19.256068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:58:19.270158 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:58:19.270253 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:58:19.281521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:58:19.312869 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:58:19.335790 systemd[1]: Switching root.
Jan 29 11:58:19.518666 systemd-journald[217]: Journal stopped
Jan 29 11:58:16.836955 ignition[1058]: INFO : Ignition 2.19.0 Jan 29 11:58:16.842512 ignition[1058]: INFO : Stage: files Jan 29 11:58:16.842512 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:16.842512 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:58:16.842512 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:58:16.887541 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:58:16.887541 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:58:16.952900 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:58:16.962213 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:58:16.962213 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:58:16.953332 unknown[1058]: wrote ssh authorized keys file for user: core Jan 29 11:58:16.988965 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:58:17.002731 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 11:58:17.058086 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:58:17.167186 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:58:17.167186 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:58:17.197023 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 11:58:17.662135 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:58:17.867798 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 11:58:17.867798 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:58:17.891010 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:58:17.905065 ignition[1058]: INFO : files: files passed Jan 29 11:58:17.905065 ignition[1058]: INFO : Ignition finished successfully Jan 29 11:58:17.893510 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:58:17.943398 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:58:17.960775 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:58:17.990380 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:58:18.059632 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:18.059632 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:17.990485 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:58:18.091706 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:18.025938 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:18.035421 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:58:18.083886 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:58:18.126918 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:58:18.128653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 29 11:58:18.146434 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:58:18.159686 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:58:18.173367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:58:18.185842 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:58:18.217112 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:58:18.238917 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:58:18.260066 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:58:18.260159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:58:18.274807 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:18.289462 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:18.303671 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:58:18.316902 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:58:18.316975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:58:18.337487 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:58:18.351683 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:58:18.363448 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:58:18.376737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:58:18.391032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:58:18.404421 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:58:18.417868 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:58:18.433732 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:58:18.449889 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:58:18.464002 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:58:18.475874 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:58:18.475950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:58:18.493668 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:18.500606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:18.513970 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:58:18.520538 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:18.528441 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:58:18.528507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:18.549929 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:58:18.549983 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:18.566411 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:58:18.566460 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:58:18.578050 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 29 11:58:18.578093 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:58:18.650186 ignition[1110]: INFO : Ignition 2.19.0 Jan 29 11:58:18.650186 ignition[1110]: INFO : Stage: umount Jan 29 11:58:18.650186 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:18.650186 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 11:58:18.650186 ignition[1110]: INFO : umount: umount passed Jan 29 11:58:18.650186 ignition[1110]: INFO : Ignition finished successfully Jan 29 11:58:18.612830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:58:18.631698 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:58:18.631784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:18.644740 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:58:18.668912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:58:18.668991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:18.683730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:58:18.683770 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:18.703573 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:58:18.703676 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:58:18.715706 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:58:18.715799 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:58:18.727730 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:58:18.727777 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:58:18.740344 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:58:18.740397 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:58:18.753218 systemd[1]: Stopped target network.target - Network. Jan 29 11:58:18.758857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:58:18.758921 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:58:18.773336 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:58:18.785700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:58:18.792140 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:18.799202 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:58:18.805058 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:58:18.816993 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:58:18.817044 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:18.829333 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:58:18.829375 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:18.841729 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:58:18.841789 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:58:18.855014 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:58:18.855061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:18.868436 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 29 11:58:18.881326 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:58:18.896871 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:58:18.901604 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 29 11:58:18.903316 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:58:18.903448 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:58:19.172873 kernel: hv_netvsc 000d3ac3-937e-000d-3ac3-937e000d3ac3 eth0: Data path switched from VF: enP47252s1 Jan 29 11:58:18.918888 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:58:18.918999 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:58:18.939384 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:58:18.939458 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:18.971835 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:58:18.984501 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:58:18.984570 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:58:19.007668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:58:19.007722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:19.020906 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:58:19.020948 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:19.033206 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:58:19.033255 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:19.054462 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:19.075218 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:58:19.075311 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:58:19.099310 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:58:19.099437 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:19.119488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:58:19.119562 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:19.131469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:58:19.131503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:19.143216 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:58:19.143269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:19.167436 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:58:19.167498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:19.183422 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:19.416257 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 29 11:58:19.183475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:19.195865 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:58:19.195931 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 29 11:58:19.213820 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:58:19.227675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:58:19.227742 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:19.241876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:19.241932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:19.255970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:58:19.256068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:58:19.270158 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:58:19.270253 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:58:19.281521 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:58:19.312869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:58:19.335790 systemd[1]: Switching root. Jan 29 11:58:19.518666 systemd-journald[217]: Journal stopped Jan 29 11:58:24.287022 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:58:24.287050 kernel: SELinux: policy capability open_perms=1 Jan 29 11:58:24.287060 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:58:24.287068 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:58:24.287078 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:58:24.287086 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:58:24.287094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:58:24.287103 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:58:24.287111 kernel: audit: type=1403 audit(1738151900.634:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:58:24.287121 systemd[1]: Successfully loaded SELinux policy in 109.724ms. Jan 29 11:58:24.287132 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.913ms. Jan 29 11:58:24.287142 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:24.287156 systemd[1]: Detected virtualization microsoft. Jan 29 11:58:24.287165 systemd[1]: Detected architecture arm64. Jan 29 11:58:24.287174 systemd[1]: Detected first boot. Jan 29 11:58:24.287186 systemd[1]: Hostname set to <ci-4081.3.0-a-ecab7ceadc>. Jan 29 11:58:24.287195 systemd[1]: Initializing machine ID from random generator. Jan 29 11:58:24.287204 zram_generator::config[1151]: No configuration found. Jan 29 11:58:24.287214 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:58:24.287223 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:58:24.287231 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:58:24.287241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:58:24.287252 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:58:24.287261 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:58:24.287271 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 29 11:58:24.287280 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:58:24.287289 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:58:24.287299 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:58:24.287308 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:58:24.287319 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:58:24.287328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:24.287337 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:24.287347 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:58:24.287357 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:58:24.287366 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:58:24.287376 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:24.287385 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:58:24.287396 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:24.287405 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:58:24.287415 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:58:24.287427 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:58:24.287436 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:58:24.287446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:24.287455 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:24.287465 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:24.287476 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:24.287485 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:58:24.287495 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:58:24.287504 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:24.287513 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:24.287524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:24.287535 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:58:24.287545 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:58:24.287554 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:58:24.287565 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:58:24.287574 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:58:24.287584 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:58:24.287593 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 29 11:58:24.287605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:58:24.287625 systemd[1]: Reached target machines.target - Containers. Jan 29 11:58:24.287636 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:58:24.287646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:24.287655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:24.287665 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:58:24.287674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:24.287684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:24.287696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:24.287706 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:58:24.287716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:24.287726 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:58:24.287736 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:58:24.287745 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:58:24.287755 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:58:24.287764 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:58:24.287775 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:24.287786 kernel: loop: module loaded Jan 29 11:58:24.287795 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:24.287823 systemd-journald[1247]: Collecting audit messages is disabled. Jan 29 11:58:24.287845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:58:24.287856 systemd-journald[1247]: Journal started Jan 29 11:58:24.287876 systemd-journald[1247]: Runtime Journal (/run/log/journal/1f960309bb054f5699dc83388d32211c) is 8.0M, max 78.5M, 70.5M free. Jan 29 11:58:23.243287 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:58:23.367323 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:58:23.367682 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:58:23.367962 systemd[1]: systemd-journald.service: Consumed 3.737s CPU time. Jan 29 11:58:24.298708 kernel: fuse: init (API version 7.39) Jan 29 11:58:24.321742 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:58:24.321799 kernel: ACPI: bus type drm_connector registered Jan 29 11:58:24.355841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:24.355911 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:58:24.365554 systemd[1]: Stopped verity-setup.service. Jan 29 11:58:24.383647 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:24.384417 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 29 11:58:24.390872 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:58:24.397231 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:58:24.403198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:58:24.409751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:58:24.416707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:58:24.424678 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:58:24.433647 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:24.444125 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:58:24.444352 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:58:24.453237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:24.453462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:24.462223 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:24.462432 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:24.471214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:24.471431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:24.480256 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:58:24.480395 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:58:24.488214 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:24.488348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:24.496357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:24.504398 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:58:24.514641 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:58:24.522851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:24.538794 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:58:24.550698 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:58:24.558446 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:58:24.565521 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:58:24.565560 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:58:24.573041 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:58:24.586767 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:58:24.594681 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:58:24.602431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:24.621849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:58:24.629492 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 29 11:58:24.637325 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:24.638292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:58:24.648065 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:24.649259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:24.667877 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:58:24.677104 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:58:24.685402 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:58:24.697231 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:58:24.710401 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:58:24.711477 systemd-journald[1247]: Time spent on flushing to /var/log/journal/1f960309bb054f5699dc83388d32211c is 27.237ms for 895 entries. Jan 29 11:58:24.711477 systemd-journald[1247]: System Journal (/var/log/journal/1f960309bb054f5699dc83388d32211c) is 8.0M, max 2.6G, 2.6G free. Jan 29 11:58:24.766772 systemd-journald[1247]: Received client request to flush runtime journal. Jan 29 11:58:24.725173 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:58:24.734734 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:58:24.746251 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:58:24.760946 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:58:24.769771 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:58:24.787824 udevadm[1288]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:58:24.792676 kernel: loop0: detected capacity change from 0 to 114328 Jan 29 11:58:24.793677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:25.335318 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:58:25.336010 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:58:25.496524 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:58:25.513948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:25.547456 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 29 11:58:25.547476 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 29 11:58:25.551315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:25.597641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:58:25.752634 kernel: loop1: detected capacity change from 0 to 31320 Jan 29 11:58:26.382651 kernel: loop2: detected capacity change from 0 to 194096 Jan 29 11:58:26.441651 kernel: loop3: detected capacity change from 0 to 114432 Jan 29 11:58:27.673514 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 29 11:58:27.690821 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:27.712412 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Jan 29 11:58:27.734686 kernel: loop4: detected capacity change from 0 to 114328 Jan 29 11:58:27.745642 kernel: loop5: detected capacity change from 0 to 31320 Jan 29 11:58:27.758638 kernel: loop6: detected capacity change from 0 to 194096 Jan 29 11:58:27.779731 kernel: loop7: detected capacity change from 0 to 114432 Jan 29 11:58:27.787003 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 29 11:58:27.787406 (sd-merge)[1311]: Merged extensions into '/usr'. Jan 29 11:58:27.790909 systemd[1]: Reloading requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:58:27.790927 systemd[1]: Reloading... Jan 29 11:58:27.878646 zram_generator::config[1337]: No configuration found. Jan 29 11:58:28.080702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:28.106174 kernel: hv_vmbus: registering driver hv_balloon Jan 29 11:58:28.106269 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 29 11:58:28.112560 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 29 11:58:28.118799 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:58:28.125794 kernel: hv_vmbus: registering driver hyperv_fb Jan 29 11:58:28.138792 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 29 11:58:28.138901 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 29 11:58:28.146717 kernel: Console: switching to colour dummy device 80x25 Jan 29 11:58:28.155714 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 11:58:28.165822 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:58:28.166204 systemd[1]: Reloading finished in 374 ms. Jan 29 11:58:28.199274 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1352) Jan 29 11:58:28.204146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:28.213657 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:58:28.279898 systemd[1]: Starting ensure-sysext.service... Jan 29 11:58:28.301900 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:58:28.309458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:28.330824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:28.334243 systemd-tmpfiles[1468]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:58:28.334514 systemd-tmpfiles[1468]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:58:28.335154 systemd-tmpfiles[1468]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:58:28.335357 systemd-tmpfiles[1468]: ACLs are not supported, ignoring. Jan 29 11:58:28.335401 systemd-tmpfiles[1468]: ACLs are not supported, ignoring. Jan 29 11:58:28.341841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 29 11:58:28.359913 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:58:28.367593 systemd[1]: Reloading requested from client PID 1465 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:58:28.367607 systemd[1]: Reloading... Jan 29 11:58:28.429670 zram_generator::config[1501]: No configuration found. Jan 29 11:58:28.530887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:28.598888 systemd-tmpfiles[1468]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:28.598898 systemd-tmpfiles[1468]: Skipping /boot Jan 29 11:58:28.603661 systemd[1]: Reloading finished in 235 ms. Jan 29 11:58:28.607223 systemd-tmpfiles[1468]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:28.607307 systemd-tmpfiles[1468]: Skipping /boot Jan 29 11:58:28.620840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:28.621018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:28.634194 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:28.658964 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:58:28.668887 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:58:28.682720 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:58:28.698876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:28.705957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:58:28.715737 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:58:28.729975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:28.742459 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:58:28.754351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:58:28.767010 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:58:28.786947 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:58:28.807584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:28.819934 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:58:28.831777 augenrules[1591]: No rules Jan 29 11:58:28.838074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:28.846951 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:28.861029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:28.870094 lvm[1593]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:58:28.871999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:28.875666 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 29 11:58:28.883866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:28.885668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:28.893034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:28.894658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:28.906245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:28.908501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:28.917330 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:58:28.929690 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:58:28.942923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:28.949967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:28.954866 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:58:28.964924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:28.972253 lvm[1608]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:58:28.978219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:28.987602 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:28.995745 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:29.005195 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:58:29.013452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:29.013751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:29.021112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:29.021378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:29.031192 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:29.031448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:29.048798 systemd[1]: Finished ensure-sysext.service. Jan 29 11:58:29.054794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:29.066818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:29.078327 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:29.087844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:29.097873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:29.105468 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:29.105544 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:58:29.112742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:29.113752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:29.122469 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 11:58:29.122660 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:29.130085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:29.130234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:29.139189 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:29.139344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:29.149712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:29.149783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:29.255573 systemd-resolved[1570]: Positive Trust Anchors: Jan 29 11:58:29.255588 systemd-resolved[1570]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:29.255952 systemd-resolved[1570]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:29.259678 systemd-networkd[1466]: lo: Link UP Jan 29 11:58:29.259686 systemd-networkd[1466]: lo: Gained carrier Jan 29 11:58:29.261532 systemd-networkd[1466]: Enumeration completed Jan 29 11:58:29.261687 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:58:29.261882 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:29.261885 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:29.276771 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:58:29.294969 systemd-resolved[1570]: Using system hostname 'ci-4081.3.0-a-ecab7ceadc'. Jan 29 11:58:29.328633 kernel: mlx5_core b894:00:02.0 enP47252s1: Link up Jan 29 11:58:29.360481 kernel: hv_netvsc 000d3ac3-937e-000d-3ac3-937e000d3ac3 eth0: Data path switched to VF: enP47252s1 Jan 29 11:58:29.360078 systemd-networkd[1466]: enP47252s1: Link UP Jan 29 11:58:29.360203 systemd-networkd[1466]: eth0: Link UP Jan 29 11:58:29.360206 systemd-networkd[1466]: eth0: Gained carrier Jan 29 11:58:29.360221 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:29.362301 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:29.369871 systemd[1]: Reached target network.target - Network. Jan 29 11:58:29.377344 systemd-networkd[1466]: enP47252s1: Gained carrier Jan 29 11:58:29.378070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:29.396669 systemd-networkd[1466]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 29 11:58:29.723243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:58:30.046923 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:58:30.055056 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:58:31.067776 systemd-networkd[1466]: eth0: Gained IPv6LL Jan 29 11:58:31.070008 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:58:31.078351 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:58:31.259726 systemd-networkd[1466]: enP47252s1: Gained IPv6LL Jan 29 11:58:34.249100 ldconfig[1280]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:58:34.392444 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:58:34.405804 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:58:34.420703 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:58:34.429198 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:58:34.435818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:58:34.443338 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:58:34.450875 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:58:34.458886 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:58:34.466167 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:58:34.473834 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:58:34.473877 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:58:34.479321 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:58:34.486146 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:58:34.494167 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:58:34.504559 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:58:34.511001 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:58:34.517086 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:58:34.522182 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:58:34.527296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:58:34.527352 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:58:34.539726 systemd[1]: Starting chronyd.service - NTP client/server... Jan 29 11:58:34.550806 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:58:34.563429 (chronyd)[1637]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 29 11:58:34.566475 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:58:34.573781 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:58:34.584863 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 29 11:58:34.594032 jq[1643]: false Jan 29 11:58:34.594821 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:58:34.598666 chronyd[1646]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 29 11:58:34.601533 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:58:34.601580 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 29 11:58:34.602672 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 29 11:58:34.608293 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 29 11:58:34.610315 KVP[1647]: KVP starting; pid is:1647 Jan 29 11:58:34.610747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:34.620689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:58:34.629722 kernel: hv_utils: KVP IC version 4.0 Jan 29 11:58:34.630667 KVP[1647]: KVP LIC Version: 3.1 Jan 29 11:58:34.632779 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:58:34.638017 chronyd[1646]: Timezone right/UTC failed leap second check, ignoring Jan 29 11:58:34.638467 chronyd[1646]: Loaded seccomp filter (level 2) Jan 29 11:58:34.641227 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:58:34.656761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:58:34.663738 extend-filesystems[1644]: Found loop4 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found loop5 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found loop6 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found loop7 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found sda Jan 29 11:58:34.663738 extend-filesystems[1644]: Found sda1 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found sda2 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found sda3 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found usr Jan 29 11:58:34.663738 extend-filesystems[1644]: Found sda4 Jan 29 11:58:34.663738 extend-filesystems[1644]: Found sda6 Jan 29 11:58:34.743716 extend-filesystems[1644]: Found sda7 Jan 29 11:58:34.743716 extend-filesystems[1644]: Found sda9 Jan 29 11:58:34.743716 extend-filesystems[1644]: Checking size of /dev/sda9 Jan 29 11:58:34.667794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:58:34.694978 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:58:34.707991 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:58:34.708512 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:58:34.710784 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:58:34.743903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:58:34.766904 systemd[1]: Started chronyd.service - NTP client/server. Jan 29 11:58:34.785758 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
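chronyd comes up with a seccomp filter and, later in this log, selects the PHC0 reference clock (the para-virtualized PTP device Hyper-V exposes to the guest). Its synchronization state can be checked at runtime (a minimal sketch):

    chronyc tracking      # current reference, offset and frequency error
    chronyc sources -v    # candidate sources; PHC0 appears as a refclock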
Jan 29 11:58:34.785913 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:58:34.786718 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:58:34.787052 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:58:34.795418 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:58:34.807633 jq[1664]: true Jan 29 11:58:34.814393 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:58:34.816784 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:58:34.825568 dbus-daemon[1640]: [system] SELinux support is enabled Jan 29 11:58:34.829202 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:58:34.864226 extend-filesystems[1644]: Old size kept for /dev/sda9 Jan 29 11:58:34.864226 extend-filesystems[1644]: Found sr0 Jan 29 11:58:34.917905 jq[1679]: true Jan 29 11:58:34.864241 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:58:34.920134 update_engine[1663]: I20250129 11:58:34.871243 1663 main.cc:92] Flatcar Update Engine starting Jan 29 11:58:34.920134 update_engine[1663]: I20250129 11:58:34.872468 1663 update_check_scheduler.cc:74] Next update check in 2m36s Jan 29 11:58:34.920412 coreos-metadata[1639]: Jan 29 11:58:34.907 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 11:58:34.920412 coreos-metadata[1639]: Jan 29 11:58:34.915 INFO Fetch successful Jan 29 11:58:34.920412 coreos-metadata[1639]: Jan 29 11:58:34.916 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 29 11:58:34.867168 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:58:34.902968 (ntainerd)[1682]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:58:34.905518 systemd-logind[1660]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 29 11:58:34.905732 systemd-logind[1660]: New seat seat0. Jan 29 11:58:34.913567 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:58:34.922239 coreos-metadata[1639]: Jan 29 11:58:34.921 INFO Fetch successful Jan 29 11:58:34.922239 coreos-metadata[1639]: Jan 29 11:58:34.921 INFO Fetching http://168.63.129.16/machine/1347fa46-a774-4436-8e2d-65518afc2e99/83106487%2Ddff0%2D4f8a%2Db409%2D3b6f77b863c5.%5Fci%2D4081.3.0%2Da%2Decab7ceadc?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 29 11:58:34.923924 coreos-metadata[1639]: Jan 29 11:58:34.923 INFO Fetch successful Jan 29 11:58:34.923924 coreos-metadata[1639]: Jan 29 11:58:34.923 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 29 11:58:34.937941 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:58:34.942058 coreos-metadata[1639]: Jan 29 11:58:34.939 INFO Fetch successful Jan 29 11:58:34.942104 tar[1676]: linux-arm64/helm Jan 29 11:58:34.938061 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
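coreos-metadata is talking to the two Azure endpoints visible above: the WireServer at 168.63.129.16 and the Instance Metadata Service at 169.254.169.254, which only answers when a Metadata: true header is sent. The same queries can be reproduced by hand from the node (a sketch using the exact URLs from the log):

    curl -s 'http://168.63.129.16/?comp=versions'
    curl -s -H 'Metadata: true' \
        'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'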
Jan 29 11:58:34.949204 dbus-daemon[1640]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:58:34.955946 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:58:34.955964 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:58:34.973584 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:58:34.991889 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:58:35.018665 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:58:35.026597 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:58:35.092214 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1708) Jan 29 11:58:35.227861 sshd_keygen[1662]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:58:35.256671 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:58:35.270878 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:58:35.288912 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 29 11:58:35.297439 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:58:35.297738 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:58:35.312975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:58:35.345884 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 29 11:58:35.452563 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:58:35.473947 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:58:35.480919 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:58:35.488611 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:58:35.643126 tar[1676]: linux-arm64/LICENSE Jan 29 11:58:35.643254 tar[1676]: linux-arm64/README.md Jan 29 11:58:35.653792 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:58:35.861699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:35.868468 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:58:35.890274 locksmithd[1712]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:58:35.946925 bash[1727]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:58:35.948855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:58:35.957746 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:58:36.350244 kubelet[1787]: E0129 11:58:36.350098 1787 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:58:36.352193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:58:36.352329 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
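The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet: on a kubeadm-managed node that file is only written by kubeadm init or kubeadm join, so the restart loop that follows is expected until the node is bootstrapped. A sketch of how to confirm the cause and what resolves it (the endpoint, token and hash are placeholders):

    journalctl -u kubelet -b --no-pager | tail -n 3   # shows the config.yaml open error
    systemctl show kubelet -p NRestarts               # systemd's restart counter for the unit
    # Joining a cluster writes /var/lib/kubelet/config.yaml and restarts kubelet:
    kubeadm join <control-plane:6443> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>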
Jan 29 11:58:36.863426 containerd[1682]: time="2025-01-29T11:58:36.863340620Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:58:36.886586 containerd[1682]: time="2025-01-29T11:58:36.886507500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.887815900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.887846740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.887861940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888019540Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888034700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888090260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888101700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888253340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888267980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888280460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888642 containerd[1682]: time="2025-01-29T11:58:36.888290540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888878 containerd[1682]: time="2025-01-29T11:58:36.888351020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888878 containerd[1682]: time="2025-01-29T11:58:36.888523780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888878 containerd[1682]: time="2025-01-29T11:58:36.888609780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:36.888878 containerd[1682]: time="2025-01-29T11:58:36.888639460Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:58:36.888878 containerd[1682]: time="2025-01-29T11:58:36.888716140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:58:36.888878 containerd[1682]: time="2025-01-29T11:58:36.888751820Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:58:37.146699 containerd[1682]: time="2025-01-29T11:58:37.146599980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:58:37.146699 containerd[1682]: time="2025-01-29T11:58:37.146669060Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:58:37.146699 containerd[1682]: time="2025-01-29T11:58:37.146685620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:58:37.146699 containerd[1682]: time="2025-01-29T11:58:37.146700820Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:58:37.146852 containerd[1682]: time="2025-01-29T11:58:37.146714460Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:58:37.146871 containerd[1682]: time="2025-01-29T11:58:37.146861860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:58:37.147101 containerd[1682]: time="2025-01-29T11:58:37.147078940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:58:37.147214 containerd[1682]: time="2025-01-29T11:58:37.147192580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:58:37.147214 containerd[1682]: time="2025-01-29T11:58:37.147216620Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:58:37.147270 containerd[1682]: time="2025-01-29T11:58:37.147230020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:58:37.147270 containerd[1682]: time="2025-01-29T11:58:37.147244780Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147270 containerd[1682]: time="2025-01-29T11:58:37.147258340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147321 containerd[1682]: time="2025-01-29T11:58:37.147270580Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147321 containerd[1682]: time="2025-01-29T11:58:37.147284300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147321 containerd[1682]: time="2025-01-29T11:58:37.147299140Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 11:58:37.147321 containerd[1682]: time="2025-01-29T11:58:37.147311020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147389 containerd[1682]: time="2025-01-29T11:58:37.147323180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147389 containerd[1682]: time="2025-01-29T11:58:37.147335380Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:58:37.147389 containerd[1682]: time="2025-01-29T11:58:37.147376900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147443 containerd[1682]: time="2025-01-29T11:58:37.147390980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147443 containerd[1682]: time="2025-01-29T11:58:37.147404620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147443 containerd[1682]: time="2025-01-29T11:58:37.147417260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147443 containerd[1682]: time="2025-01-29T11:58:37.147429140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147517 containerd[1682]: time="2025-01-29T11:58:37.147441940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147517 containerd[1682]: time="2025-01-29T11:58:37.147455060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147517 containerd[1682]: time="2025-01-29T11:58:37.147469420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147517 containerd[1682]: time="2025-01-29T11:58:37.147481940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147517 containerd[1682]: time="2025-01-29T11:58:37.147496780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147517 containerd[1682]: time="2025-01-29T11:58:37.147508380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147640 containerd[1682]: time="2025-01-29T11:58:37.147526220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147640 containerd[1682]: time="2025-01-29T11:58:37.147538860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147640 containerd[1682]: time="2025-01-29T11:58:37.147558860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:58:37.147640 containerd[1682]: time="2025-01-29T11:58:37.147579340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.147640 containerd[1682]: time="2025-01-29T11:58:37.147591420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 11:58:37.147640 containerd[1682]: time="2025-01-29T11:58:37.147605780Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:58:37.148208 containerd[1682]: time="2025-01-29T11:58:37.148182620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148216900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148228900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148257060Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148266700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148279180Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148289220Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:58:37.148334 containerd[1682]: time="2025-01-29T11:58:37.148298980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:58:37.149542 containerd[1682]: time="2025-01-29T11:58:37.148584140Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:58:37.149542 containerd[1682]: time="2025-01-29T11:58:37.148672460Z" level=info msg="Connect containerd service" Jan 29 11:58:37.149542 containerd[1682]: time="2025-01-29T11:58:37.148856740Z" level=info msg="using legacy CRI server" Jan 29 11:58:37.149542 containerd[1682]: time="2025-01-29T11:58:37.148870820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:58:37.149542 containerd[1682]: time="2025-01-29T11:58:37.148969540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:58:37.150855 containerd[1682]: time="2025-01-29T11:58:37.150752060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:58:37.151115 containerd[1682]: time="2025-01-29T11:58:37.151062300Z" level=info msg="Start subscribing containerd event" Jan 29 11:58:37.151155 containerd[1682]: time="2025-01-29T11:58:37.151128940Z" level=info msg="Start recovering state" Jan 29 11:58:37.151340 containerd[1682]: time="2025-01-29T11:58:37.151319980Z" level=info msg="Start event monitor" Jan 29 11:58:37.151373 containerd[1682]: time="2025-01-29T11:58:37.151340500Z" level=info msg="Start snapshots syncer" Jan 29 11:58:37.151373 containerd[1682]: time="2025-01-29T11:58:37.151349620Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:58:37.151373 containerd[1682]: time="2025-01-29T11:58:37.151363300Z" level=info msg="Start streaming server" Jan 29 11:58:37.151663 containerd[1682]: time="2025-01-29T11:58:37.151641860Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:58:37.151699 containerd[1682]: time="2025-01-29T11:58:37.151689580Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:58:37.157305 containerd[1682]: time="2025-01-29T11:58:37.151744820Z" level=info msg="containerd successfully booted in 0.289262s" Jan 29 11:58:37.151842 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:58:37.158815 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:58:37.169721 systemd[1]: Startup finished in 724ms (kernel) + 12.445s (initrd) + 16.643s (userspace) = 29.813s. Jan 29 11:58:37.351769 login[1773]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:37.354382 login[1774]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:37.364340 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
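The containerd startup dump above shows the CRI plugin coming up with the overlayfs snapshotter and SystemdCgroup:true for runc, plus an expected CNI error because /etc/cni/net.d is still empty. The effective configuration and per-plugin status can be inspected directly (a minimal sketch):

    containerd config dump   # full merged config, including the CRI section logged above
    ctr plugins ls           # load status per plugin (ok / skip), matching the skip messages
    ls /etc/cni/net.d        # empty until a CNI plugin installs a network conf file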
Jan 29 11:58:37.364928 systemd-logind[1660]: New session 2 of user core. Jan 29 11:58:37.372968 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:58:37.375544 systemd-logind[1660]: New session 1 of user core. Jan 29 11:58:37.383321 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:58:37.391194 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:58:37.394343 (systemd)[1811]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:58:37.504312 systemd[1811]: Queued start job for default target default.target. Jan 29 11:58:37.514817 systemd[1811]: Created slice app.slice - User Application Slice. Jan 29 11:58:37.514847 systemd[1811]: Reached target paths.target - Paths. Jan 29 11:58:37.514858 systemd[1811]: Reached target timers.target - Timers. Jan 29 11:58:37.515982 systemd[1811]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:58:37.525802 systemd[1811]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:58:37.525861 systemd[1811]: Reached target sockets.target - Sockets. Jan 29 11:58:37.525873 systemd[1811]: Reached target basic.target - Basic System. Jan 29 11:58:37.525991 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:58:37.526919 systemd[1811]: Reached target default.target - Main User Target. Jan 29 11:58:37.526961 systemd[1811]: Startup finished in 127ms. Jan 29 11:58:37.527061 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:58:37.527732 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:58:38.535702 waagent[1771]: 2025-01-29T11:58:38.535594Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 29 11:58:38.542532 waagent[1771]: 2025-01-29T11:58:38.542465Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 29 11:58:38.547732 waagent[1771]: 2025-01-29T11:58:38.547687Z INFO Daemon Daemon Python: 3.11.9 Jan 29 11:58:38.552979 waagent[1771]: 2025-01-29T11:58:38.552904Z INFO Daemon Daemon Run daemon Jan 29 11:58:38.558237 waagent[1771]: 2025-01-29T11:58:38.558193Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 29 11:58:38.567942 waagent[1771]: 2025-01-29T11:58:38.567783Z INFO Daemon Daemon Using waagent for provisioning Jan 29 11:58:38.574334 waagent[1771]: 2025-01-29T11:58:38.574291Z INFO Daemon Daemon Activate resource disk Jan 29 11:58:38.579450 waagent[1771]: 2025-01-29T11:58:38.579406Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 29 11:58:38.591403 waagent[1771]: 2025-01-29T11:58:38.591355Z INFO Daemon Daemon Found device: None Jan 29 11:58:38.596315 waagent[1771]: 2025-01-29T11:58:38.596273Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 29 11:58:38.604857 waagent[1771]: 2025-01-29T11:58:38.604816Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 29 11:58:38.617708 waagent[1771]: 2025-01-29T11:58:38.617658Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 11:58:38.623992 waagent[1771]: 2025-01-29T11:58:38.623950Z INFO Daemon Daemon Running default provisioning handler Jan 29 11:58:38.636018 waagent[1771]: 2025-01-29T11:58:38.635485Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 
'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 29 11:58:38.650396 waagent[1771]: 2025-01-29T11:58:38.650334Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 29 11:58:38.660104 waagent[1771]: 2025-01-29T11:58:38.660054Z INFO Daemon Daemon cloud-init is enabled: False Jan 29 11:58:38.665437 waagent[1771]: 2025-01-29T11:58:38.665388Z INFO Daemon Daemon Copying ovf-env.xml Jan 29 11:58:38.856890 waagent[1771]: 2025-01-29T11:58:38.856710Z INFO Daemon Daemon Successfully mounted dvd Jan 29 11:58:38.876653 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 29 11:58:38.878569 waagent[1771]: 2025-01-29T11:58:38.878499Z INFO Daemon Daemon Detect protocol endpoint Jan 29 11:58:38.883679 waagent[1771]: 2025-01-29T11:58:38.883601Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 11:58:38.889588 waagent[1771]: 2025-01-29T11:58:38.889546Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 29 11:58:38.896154 waagent[1771]: 2025-01-29T11:58:38.896114Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 29 11:58:38.902060 waagent[1771]: 2025-01-29T11:58:38.902019Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 29 11:58:38.907294 waagent[1771]: 2025-01-29T11:58:38.907253Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 29 11:58:38.953564 waagent[1771]: 2025-01-29T11:58:38.953517Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 29 11:58:38.961087 waagent[1771]: 2025-01-29T11:58:38.961057Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 29 11:58:38.967086 waagent[1771]: 2025-01-29T11:58:38.967047Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 29 11:58:39.215708 waagent[1771]: 2025-01-29T11:58:39.215038Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 29 11:58:39.222103 waagent[1771]: 2025-01-29T11:58:39.222042Z INFO Daemon Daemon Forcing an update of the goal state. Jan 29 11:58:39.231118 waagent[1771]: 2025-01-29T11:58:39.231070Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 11:58:39.254343 waagent[1771]: 2025-01-29T11:58:39.254297Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 29 11:58:39.260062 waagent[1771]: 2025-01-29T11:58:39.260009Z INFO Daemon Jan 29 11:58:39.263104 waagent[1771]: 2025-01-29T11:58:39.263063Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6b20ddce-74ff-4cd3-b49e-f1333a2eaf5d eTag: 12166146051877904781 source: Fabric] Jan 29 11:58:39.274850 waagent[1771]: 2025-01-29T11:58:39.274802Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
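waagent's protocol detection reduces to the two checks logged above: a route to 168.63.129.16 must exist and the WireServer must answer over HTTP. Both are easy to verify by hand (a minimal sketch):

    ip route get 168.63.129.16                      # should resolve via eth0's gateway
    curl -s 'http://168.63.129.16/?comp=versions'   # the endpoint waagent probes for goal state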
Jan 29 11:58:39.282025 waagent[1771]: 2025-01-29T11:58:39.281981Z INFO Daemon Jan 29 11:58:39.284994 waagent[1771]: 2025-01-29T11:58:39.284952Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 29 11:58:39.300134 waagent[1771]: 2025-01-29T11:58:39.300098Z INFO Daemon Daemon Downloading artifacts profile blob Jan 29 11:58:39.382706 waagent[1771]: 2025-01-29T11:58:39.382638Z INFO Daemon Downloaded certificate {'thumbprint': 'EA0739552B4198C974F62E00709B1828966BCADF', 'hasPrivateKey': True} Jan 29 11:58:39.393111 waagent[1771]: 2025-01-29T11:58:39.393066Z INFO Daemon Downloaded certificate {'thumbprint': '6373C9003919972D30F3ED8415DE0A098B4F1476', 'hasPrivateKey': False} Jan 29 11:58:39.402786 waagent[1771]: 2025-01-29T11:58:39.402739Z INFO Daemon Fetch goal state completed Jan 29 11:58:39.413884 waagent[1771]: 2025-01-29T11:58:39.413841Z INFO Daemon Daemon Starting provisioning Jan 29 11:58:39.418844 waagent[1771]: 2025-01-29T11:58:39.418798Z INFO Daemon Daemon Handle ovf-env.xml. Jan 29 11:58:39.424957 waagent[1771]: 2025-01-29T11:58:39.424896Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-ecab7ceadc] Jan 29 11:58:39.446889 waagent[1771]: 2025-01-29T11:58:39.446828Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-ecab7ceadc] Jan 29 11:58:39.453543 waagent[1771]: 2025-01-29T11:58:39.453486Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 29 11:58:39.459791 waagent[1771]: 2025-01-29T11:58:39.459746Z INFO Daemon Daemon Primary interface is [eth0] Jan 29 11:58:39.524150 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:39.524963 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:39.525262 waagent[1771]: 2025-01-29T11:58:39.525020Z INFO Daemon Daemon Create user account if not exists Jan 29 11:58:39.524992 systemd-networkd[1466]: eth0: DHCP lease lost Jan 29 11:58:39.531766 waagent[1771]: 2025-01-29T11:58:39.531571Z INFO Daemon Daemon User core already exists, skip useradd Jan 29 11:58:39.537719 waagent[1771]: 2025-01-29T11:58:39.537661Z INFO Daemon Daemon Configure sudoer Jan 29 11:58:39.542384 waagent[1771]: 2025-01-29T11:58:39.542325Z INFO Daemon Daemon Configure sshd Jan 29 11:58:39.547494 waagent[1771]: 2025-01-29T11:58:39.547441Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 29 11:58:39.562113 waagent[1771]: 2025-01-29T11:58:39.562041Z INFO Daemon Daemon Deploy ssh public key. Jan 29 11:58:39.568737 systemd-networkd[1466]: eth0: DHCPv6 lease lost Jan 29 11:58:39.586674 systemd-networkd[1466]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 29 11:58:40.663640 waagent[1771]: 2025-01-29T11:58:40.659688Z INFO Daemon Daemon Provisioning complete Jan 29 11:58:41.002234 waagent[1771]: 2025-01-29T11:58:41.002126Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 29 11:58:41.012669 waagent[1771]: 2025-01-29T11:58:41.012586Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 29 11:58:41.028499 waagent[1771]: 2025-01-29T11:58:41.028440Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 29 11:58:41.155159 waagent[1868]: 2025-01-29T11:58:41.154722Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 29 11:58:41.155159 waagent[1868]: 2025-01-29T11:58:41.154873Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 29 11:58:41.155159 waagent[1868]: 2025-01-29T11:58:41.154926Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 29 11:58:41.177650 waagent[1868]: 2025-01-29T11:58:41.176870Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 29 11:58:41.177650 waagent[1868]: 2025-01-29T11:58:41.177099Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 11:58:41.177650 waagent[1868]: 2025-01-29T11:58:41.177158Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 11:58:41.185326 waagent[1868]: 2025-01-29T11:58:41.185264Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 11:58:41.191877 waagent[1868]: 2025-01-29T11:58:41.191828Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 29 11:58:41.192373 waagent[1868]: 2025-01-29T11:58:41.192328Z INFO ExtHandler Jan 29 11:58:41.192442 waagent[1868]: 2025-01-29T11:58:41.192412Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8b35028e-6b92-4665-af53-aaf50cae303d eTag: 12166146051877904781 source: Fabric] Jan 29 11:58:41.192762 waagent[1868]: 2025-01-29T11:58:41.192719Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 29 11:58:41.193322 waagent[1868]: 2025-01-29T11:58:41.193277Z INFO ExtHandler Jan 29 11:58:41.193383 waagent[1868]: 2025-01-29T11:58:41.193355Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 29 11:58:41.197444 waagent[1868]: 2025-01-29T11:58:41.197407Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 29 11:58:41.293176 waagent[1868]: 2025-01-29T11:58:41.293038Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EA0739552B4198C974F62E00709B1828966BCADF', 'hasPrivateKey': True} Jan 29 11:58:41.293525 waagent[1868]: 2025-01-29T11:58:41.293479Z INFO ExtHandler Downloaded certificate {'thumbprint': '6373C9003919972D30F3ED8415DE0A098B4F1476', 'hasPrivateKey': False} Jan 29 11:58:41.293970 waagent[1868]: 2025-01-29T11:58:41.293926Z INFO ExtHandler Fetch goal state completed Jan 29 11:58:41.312037 waagent[1868]: 2025-01-29T11:58:41.311984Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1868 Jan 29 11:58:41.312187 waagent[1868]: 2025-01-29T11:58:41.312150Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 29 11:58:41.313787 waagent[1868]: 2025-01-29T11:58:41.313743Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 29 11:58:41.314157 waagent[1868]: 2025-01-29T11:58:41.314119Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 29 11:58:41.360264 waagent[1868]: 2025-01-29T11:58:41.360215Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 29 11:58:41.360461 waagent[1868]: 2025-01-29T11:58:41.360422Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 29 11:58:41.366721 waagent[1868]: 2025-01-29T11:58:41.366256Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 29 11:58:41.372573 systemd[1]: Reloading requested from client PID 1883 ('systemctl') (unit waagent.service)... Jan 29 11:58:41.372586 systemd[1]: Reloading... Jan 29 11:58:41.443679 zram_generator::config[1915]: No configuration found. Jan 29 11:58:41.545730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:41.622490 systemd[1]: Reloading finished in 249 ms. Jan 29 11:58:41.648349 waagent[1868]: 2025-01-29T11:58:41.644781Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 29 11:58:41.649990 systemd[1]: Reloading requested from client PID 1971 ('systemctl') (unit waagent.service)... Jan 29 11:58:41.650002 systemd[1]: Reloading... Jan 29 11:58:41.725649 zram_generator::config[2003]: No configuration found. Jan 29 11:58:41.822473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:41.899591 systemd[1]: Reloading finished in 249 ms. Jan 29 11:58:41.925673 waagent[1868]: 2025-01-29T11:58:41.922958Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 29 11:58:41.925673 waagent[1868]: 2025-01-29T11:58:41.923169Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 29 11:58:42.272633 waagent[1868]: 2025-01-29T11:58:42.272489Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 29 11:58:42.273160 waagent[1868]: 2025-01-29T11:58:42.273100Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 29 11:58:42.276092 waagent[1868]: 2025-01-29T11:58:42.275999Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 29 11:58:42.276504 waagent[1868]: 2025-01-29T11:58:42.276407Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 29 11:58:42.277109 waagent[1868]: 2025-01-29T11:58:42.276990Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 29 11:58:42.277239 waagent[1868]: 2025-01-29T11:58:42.277101Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 29 11:58:42.278153 waagent[1868]: 2025-01-29T11:58:42.277386Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 11:58:42.278153 waagent[1868]: 2025-01-29T11:58:42.277485Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 11:58:42.278153 waagent[1868]: 2025-01-29T11:58:42.277697Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
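Enabling waagent-network-setup.service is what triggers the two systemctl daemon-reload cycles above. Whether the persistent-firewall helper landed correctly can be checked afterwards (a minimal sketch):

    systemctl is-enabled waagent-network-setup.service
    systemctl status waagent-network-setup.service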
Jan 29 11:58:42.278153 waagent[1868]: 2025-01-29T11:58:42.277887Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 29 11:58:42.278153 waagent[1868]: Iface   Destination   Gateway    Flags   RefCnt   Use   Metric   Mask       MTU   Window   IRTT
Jan 29 11:58:42.278153 waagent[1868]: eth0    00000000      0114C80A   0003    0        0     1024     00000000   0     0        0
Jan 29 11:58:42.278153 waagent[1868]: eth0    0014C80A      00000000   0001    0        0     1024     00FFFFFF   0     0        0
Jan 29 11:58:42.278153 waagent[1868]: eth0    0114C80A      00000000   0005    0        0     1024     FFFFFFFF   0     0        0
Jan 29 11:58:42.278153 waagent[1868]: eth0    10813FA8      0114C80A   0007    0        0     1024     FFFFFFFF   0     0        0
Jan 29 11:58:42.278153 waagent[1868]: eth0    FEA9FEA9      0114C80A   0007    0        0     1024     FFFFFFFF   0     0        0
Jan 29 11:58:42.278523 waagent[1868]: 2025-01-29T11:58:42.278456Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 29 11:58:42.278668 waagent[1868]: 2025-01-29T11:58:42.278611Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 29 11:58:42.278833 waagent[1868]: 2025-01-29T11:58:42.278788Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 29 11:58:42.279399 waagent[1868]: 2025-01-29T11:58:42.278970Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 11:58:42.280263 waagent[1868]: 2025-01-29T11:58:42.280229Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 11:58:42.281029 waagent[1868]: 2025-01-29T11:58:42.280972Z INFO EnvHandler ExtHandler Configure routes
Jan 29 11:58:42.281429 waagent[1868]: 2025-01-29T11:58:42.281386Z INFO EnvHandler ExtHandler Gateway:None
Jan 29 11:58:42.281784 waagent[1868]: 2025-01-29T11:58:42.281741Z INFO EnvHandler ExtHandler Routes:None
Jan 29 11:58:42.289426 waagent[1868]: 2025-01-29T11:58:42.289377Z INFO ExtHandler ExtHandler
Jan 29 11:58:42.289814 waagent[1868]: 2025-01-29T11:58:42.289767Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dea45a18-d8f4-480b-869b-84158ab88635 correlation 73361e60-653f-4b49-b056-484892713323 created: 2025-01-29T11:57:17.275978Z]
Jan 29 11:58:42.290263 waagent[1868]: 2025-01-29T11:58:42.290224Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
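In the routing table above, /proc/net/route stores IPv4 addresses as little-endian hexadecimal: 0114C80A is 10.200.20.1 (the DHCP gateway acquired earlier), 0014C80A is the on-link 10.200.20.0/24 network, and the two /32 host routes 10813FA8 and FEA9FEA9 point at 168.63.129.16 (the WireServer) and 169.254.169.254 (the IMDS). A field can be decoded by hand in plain shell (the hex value is taken from the first row above):

    # 0114C80A -> bytes 01 14 C8 0A, reversed to 0A.C8.14.01 -> 10.200.20.1
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x01
    # Compare with the kernel's own rendering of the same table
    ip route show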
Jan 29 11:58:42.291637 waagent[1868]: 2025-01-29T11:58:42.290994Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 29 11:58:42.326320 waagent[1868]: 2025-01-29T11:58:42.326270Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 707ACDC1-D5C3-4139-ADA2-51BDC599D58F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 29 11:58:42.363946 waagent[1868]: 2025-01-29T11:58:42.363884Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 29 11:58:42.363946 waagent[1868]: Executing ['ip', '-a', '-o', 'link']:
Jan 29 11:58:42.363946 waagent[1868]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 29 11:58:42.363946 waagent[1868]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:93:7e brd ff:ff:ff:ff:ff:ff
Jan 29 11:58:42.363946 waagent[1868]: 3: enP47252s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:93:7e brd ff:ff:ff:ff:ff:ff\ altname enP47252p0s2
Jan 29 11:58:42.363946 waagent[1868]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 29 11:58:42.363946 waagent[1868]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 29 11:58:42.363946 waagent[1868]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 29 11:58:42.363946 waagent[1868]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 29 11:58:42.363946 waagent[1868]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 29 11:58:42.363946 waagent[1868]: 2: eth0 inet6 fe80::20d:3aff:fec3:937e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 29 11:58:42.363946 waagent[1868]: 3: enP47252s1 inet6 fe80::20d:3aff:fec3:937e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 29 11:58:42.743922 waagent[1868]: 2025-01-29T11:58:42.743783Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules:
Jan 29 11:58:42.743922 waagent[1868]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 11:58:42.743922 waagent[1868]: pkts bytes target prot opt in out source destination
Jan 29 11:58:42.743922 waagent[1868]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 29 11:58:42.743922 waagent[1868]: pkts bytes target prot opt in out source destination
Jan 29 11:58:42.743922 waagent[1868]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 11:58:42.743922 waagent[1868]: pkts bytes target prot opt in out source destination
Jan 29 11:58:42.743922 waagent[1868]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 29 11:58:42.743922 waagent[1868]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 29 11:58:42.743922 waagent[1868]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 29 11:58:42.746859 waagent[1868]: 2025-01-29T11:58:42.746793Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 29 11:58:42.746859 waagent[1868]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 11:58:42.746859 waagent[1868]: pkts bytes target prot opt in out source destination
Jan 29 11:58:42.746859 waagent[1868]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 29 11:58:42.746859 waagent[1868]: pkts bytes target prot opt in out source destination
Jan 29 11:58:42.746859 waagent[1868]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 11:58:42.746859 waagent[1868]: pkts bytes target prot opt in out source destination
Jan 29 11:58:42.746859 waagent[1868]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 29 11:58:42.746859 waagent[1868]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 29 11:58:42.746859 waagent[1868]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 29 11:58:42.747094 waagent[1868]: 2025-01-29T11:58:42.747060Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 29 11:58:46.603047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:58:46.611770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:58:46.694804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:58:46.697180 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:58:46.765140 kubelet[2101]: E0129 11:58:46.765100 2101 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:58:46.768415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:58:46.768691 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:58:56.851977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:58:56.861879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:58:56.944459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
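The OUTPUT-chain rules waagent printed above whitelist DNS over TCP and root-owned (UID 0) traffic to the WireServer and drop any other new connection to it, so unprivileged workloads cannot reach 168.63.129.16. The same effect expressed as plain iptables commands (a sketch of the rule semantics, not waagent's actual code path):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
    iptables -L OUTPUT -nv   # reproduces the listing above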
Jan 29 11:58:56.948455 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:58:57.008226 kubelet[2117]: E0129 11:58:57.008172 2117 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:58:57.010281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:58:57.010405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:58:58.431767 chronyd[1646]: Selected source PHC0 Jan 29 11:59:07.101978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:59:07.111886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:07.204534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:07.208809 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:07.256016 kubelet[2133]: E0129 11:59:07.255961 2133 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:07.258405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:07.258751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:16.234069 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 29 11:59:17.352031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:59:17.361817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:17.461472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:17.471902 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:17.511472 kubelet[2149]: E0129 11:59:17.511417 2149 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:17.514153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:17.514386 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:20.398543 update_engine[1663]: I20250129 11:59:20.398452 1663 update_attempter.cc:509] Updating boot flags... Jan 29 11:59:20.463683 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2169) Jan 29 11:59:20.543949 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2174) Jan 29 11:59:27.601948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 11:59:27.613804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:59:27.710726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:27.721899 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:27.762838 kubelet[2231]: E0129 11:59:27.762746 2231 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:27.765238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:27.765393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:37.851930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 11:59:37.862816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:38.016454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:38.021267 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:38.060740 kubelet[2247]: E0129 11:59:38.060680 2247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:38.063470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:38.063642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:48.101984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 11:59:48.110818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:59:48.318971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:59:48.323504 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:59:48.361692 kubelet[2263]: E0129 11:59:48.361567 2263 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:59:48.364384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:59:48.364669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:59:54.243601 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:59:54.248920 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:33564.service - OpenSSH per-connection server daemon (10.200.16.10:33564). Jan 29 11:59:54.809714 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 33564 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow Jan 29 11:59:54.811069 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:54.815055 systemd-logind[1660]: New session 3 of user core. 
Jan 29 11:59:54.822878 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:59:55.206987 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:33572.service - OpenSSH per-connection server daemon (10.200.16.10:33572).
Jan 29 11:59:55.639974 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 33572 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 11:59:55.641352 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:55.646329 systemd-logind[1660]: New session 4 of user core.
Jan 29 11:59:55.651812 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:59:55.956925 sshd[2277]: pam_unix(sshd:session): session closed for user core
Jan 29 11:59:55.960696 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:33572.service: Deactivated successfully.
Jan 29 11:59:55.963507 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:59:55.964457 systemd-logind[1660]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:59:55.965596 systemd-logind[1660]: Removed session 4.
Jan 29 11:59:56.045018 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:46682.service - OpenSSH per-connection server daemon (10.200.16.10:46682).
Jan 29 11:59:56.469482 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 46682 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 11:59:56.470872 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:56.474602 systemd-logind[1660]: New session 5 of user core.
Jan 29 11:59:56.482790 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:59:56.779269 sshd[2284]: pam_unix(sshd:session): session closed for user core
Jan 29 11:59:56.783101 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:46682.service: Deactivated successfully.
Jan 29 11:59:56.784749 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:59:56.785860 systemd-logind[1660]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:59:56.787044 systemd-logind[1660]: Removed session 5.
Jan 29 11:59:56.859870 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:46696.service - OpenSSH per-connection server daemon (10.200.16.10:46696).
Jan 29 11:59:57.282968 sshd[2291]: Accepted publickey for core from 10.200.16.10 port 46696 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 11:59:57.284242 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:57.288082 systemd-logind[1660]: New session 6 of user core.
Jan 29 11:59:57.295789 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:59:57.606268 sshd[2291]: pam_unix(sshd:session): session closed for user core
Jan 29 11:59:57.609604 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:46696.service: Deactivated successfully.
Jan 29 11:59:57.611222 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:59:57.612832 systemd-logind[1660]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:59:57.613800 systemd-logind[1660]: Removed session 6.
Jan 29 11:59:57.683165 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:46710.service - OpenSSH per-connection server daemon (10.200.16.10:46710).
Jan 29 11:59:58.110668 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 46710 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 11:59:58.112028 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:58.115806 systemd-logind[1660]: New session 7 of user core.
Jan 29 11:59:58.123786 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:59:58.468328 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:59:58.468600 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:59:58.469659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 29 11:59:58.474904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:59:58.496946 sudo[2301]: pam_unix(sudo:session): session closed for user root
Jan 29 11:59:58.566862 sshd[2298]: pam_unix(sshd:session): session closed for user core
Jan 29 11:59:58.570917 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:46710.service: Deactivated successfully.
Jan 29 11:59:58.572546 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:59:58.573857 systemd-logind[1660]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:59:58.574791 systemd-logind[1660]: Removed session 7.
Jan 29 11:59:58.653168 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:46726.service - OpenSSH per-connection server daemon (10.200.16.10:46726).
Jan 29 11:59:58.789874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:59:58.801052 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:59:58.838435 kubelet[2316]: E0129 11:59:58.838380 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:59:58.842054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:59:58.842322 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:59:59.074341 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 46726 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 11:59:59.075676 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:59.080142 systemd-logind[1660]: New session 8 of user core.
Jan 29 11:59:59.085776 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:59:59.319655 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 11:59:59.320206 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:59:59.323554 sudo[2326]: pam_unix(sudo:session): session closed for user root
Jan 29 11:59:59.328512 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 29 11:59:59.329003 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:59:59.343897 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 29 11:59:59.345508 auditctl[2329]: No rules
Jan 29 11:59:59.345837 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:59:59.346004 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 29 11:59:59.348784 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 11:59:59.383477 augenrules[2347]: No rules
Jan 29 11:59:59.384855 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 11:59:59.386345 sudo[2325]: pam_unix(sudo:session): session closed for user root
Jan 29 11:59:59.465652 sshd[2309]: pam_unix(sshd:session): session closed for user core
Jan 29 11:59:59.468519 systemd-logind[1660]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:59:59.470309 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:46726.service: Deactivated successfully.
Jan 29 11:59:59.473385 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:59:59.474742 systemd-logind[1660]: Removed session 8.
Jan 29 11:59:59.544050 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:46730.service - OpenSSH per-connection server daemon (10.200.16.10:46730).
Jan 29 11:59:59.969075 sshd[2355]: Accepted publickey for core from 10.200.16.10 port 46730 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 11:59:59.970358 sshd[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:59.973974 systemd-logind[1660]: New session 9 of user core.
Jan 29 11:59:59.989775 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:00:00.214294 sudo[2358]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 12:00:00.214558 sudo[2358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:00:01.290032 (dockerd)[2373]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 12:00:01.290447 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 12:00:02.076792 dockerd[2373]: time="2025-01-29T12:00:02.076734447Z" level=info msg="Starting up"
Jan 29 12:00:02.530730 dockerd[2373]: time="2025-01-29T12:00:02.530650829Z" level=info msg="Loading containers: start."
Jan 29 12:00:02.667738 kernel: Initializing XFRM netlink socket
Jan 29 12:00:02.819288 systemd-networkd[1466]: docker0: Link UP
Jan 29 12:00:02.845220 dockerd[2373]: time="2025-01-29T12:00:02.844967773Z" level=info msg="Loading containers: done."
Jan 29 12:00:02.868003 dockerd[2373]: time="2025-01-29T12:00:02.867923435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 12:00:02.868221 dockerd[2373]: time="2025-01-29T12:00:02.868071355Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 29 12:00:02.868221 dockerd[2373]: time="2025-01-29T12:00:02.868203595Z" level=info msg="Daemon has completed initialization"
Jan 29 12:00:02.921020 dockerd[2373]: time="2025-01-29T12:00:02.920961206Z" level=info msg="API listen on /run/docker.sock"
Jan 29 12:00:02.922056 systemd[1]: Started docker.service - Docker Application Container Engine.
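Once "Daemon has completed initialization" and "API listen on /run/docker.sock" appear, the engine is reachable over its UNIX socket. A throwaway probe (not from the log, standard library only) that exercises the documented /_ping endpoint on the socket path reported above:

```python
# Quick sanity check against the Docker Engine API over the UNIX socket logged
# above. Sends a minimal HTTP/1.0 request for /_ping and returns the raw reply
# (a healthy daemon answers "200 OK" with body "OK").
import socket

def docker_ping(sock_path: str = "/run/docker.sock") -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()
```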
Jan 29 12:00:04.755945 containerd[1682]: time="2025-01-29T12:00:04.755826343Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 12:00:05.670546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525130783.mount: Deactivated successfully.
Jan 29 12:00:06.894441 containerd[1682]: time="2025-01-29T12:00:06.894391294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:06.897850 containerd[1682]: time="2025-01-29T12:00:06.897798497Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935"
Jan 29 12:00:06.901033 containerd[1682]: time="2025-01-29T12:00:06.900976740Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:06.905449 containerd[1682]: time="2025-01-29T12:00:06.905392624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:06.906605 containerd[1682]: time="2025-01-29T12:00:06.906425305Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.150551962s"
Jan 29 12:00:06.906605 containerd[1682]: time="2025-01-29T12:00:06.906465465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 29 12:00:06.928714 containerd[1682]: time="2025-01-29T12:00:06.928680727Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 12:00:08.249674 containerd[1682]: time="2025-01-29T12:00:08.249206405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:08.253267 containerd[1682]: time="2025-01-29T12:00:08.253215409Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561"
Jan 29 12:00:08.258322 containerd[1682]: time="2025-01-29T12:00:08.258277614Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:08.264697 containerd[1682]: time="2025-01-29T12:00:08.264599660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:08.265727 containerd[1682]: time="2025-01-29T12:00:08.265688701Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.336773414s"
Jan 29 12:00:08.265792 containerd[1682]: time="2025-01-29T12:00:08.265727461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 29 12:00:08.285467 containerd[1682]: time="2025-01-29T12:00:08.285407800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 12:00:08.851801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 29 12:00:08.858921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:00:08.966037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:00:08.976912 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:00:09.017974 kubelet[2589]: E0129 12:00:09.017917 2589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:00:09.019887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:00:09.020018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:00:09.632307 containerd[1682]: time="2025-01-29T12:00:09.632249344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:09.634718 containerd[1682]: time="2025-01-29T12:00:09.634493787Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338"
Jan 29 12:00:09.638601 containerd[1682]: time="2025-01-29T12:00:09.638552790Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:09.644069 containerd[1682]: time="2025-01-29T12:00:09.643995556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:09.645244 containerd[1682]: time="2025-01-29T12:00:09.645106597Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.359658037s"
Jan 29 12:00:09.645244 containerd[1682]: time="2025-01-29T12:00:09.645147917Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 29 12:00:09.666315 containerd[1682]: time="2025-01-29T12:00:09.666266897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 12:00:11.478359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325348950.mount: Deactivated successfully.
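Each pull above pairs a "bytes read" counter with the duration containerd reports, so an effective transfer rate falls out directly. A small helper (not from the log; "bytes read" is the registry transfer counter, so treat the result as a rough rate):

```python
# Rough effective pull rate from the numbers containerd logs above.
def pull_rate_mib_s(bytes_read: int, seconds: float) -> float:
    return bytes_read / seconds / (1024 * 1024)

# Values reported in the log:
# kube-apiserver:v1.30.9          -> 29864935 bytes in 2.150551962s (~13.2 MiB/s)
# kube-controller-manager:v1.30.9 -> 26901561 bytes in 1.336773414s (~19.2 MiB/s)
print(f"{pull_rate_mib_s(29864935, 2.150551962):.1f} MiB/s")
print(f"{pull_rate_mib_s(26901561, 1.336773414):.1f} MiB/s")
```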
Jan 29 12:00:11.827730 containerd[1682]: time="2025-01-29T12:00:11.827332441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:11.829993 containerd[1682]: time="2025-01-29T12:00:11.829544604Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712"
Jan 29 12:00:11.835227 containerd[1682]: time="2025-01-29T12:00:11.834496930Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:11.839567 containerd[1682]: time="2025-01-29T12:00:11.839499416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:11.840417 containerd[1682]: time="2025-01-29T12:00:11.840241737Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 2.17393368s"
Jan 29 12:00:11.840417 containerd[1682]: time="2025-01-29T12:00:11.840284817Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 29 12:00:11.864948 containerd[1682]: time="2025-01-29T12:00:11.864912649Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 12:00:12.658672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652766537.mount: Deactivated successfully.
Jan 29 12:00:13.667106 containerd[1682]: time="2025-01-29T12:00:13.667049687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:13.670707 containerd[1682]: time="2025-01-29T12:00:13.670457051Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 29 12:00:13.674318 containerd[1682]: time="2025-01-29T12:00:13.674267816Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:13.679722 containerd[1682]: time="2025-01-29T12:00:13.679663663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:13.681205 containerd[1682]: time="2025-01-29T12:00:13.680882665Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.815759296s"
Jan 29 12:00:13.681205 containerd[1682]: time="2025-01-29T12:00:13.680921625Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 12:00:13.701689 containerd[1682]: time="2025-01-29T12:00:13.701446691Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 12:00:14.305803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667716263.mount: Deactivated successfully.
Jan 29 12:00:14.329118 containerd[1682]: time="2025-01-29T12:00:14.329055418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:14.331587 containerd[1682]: time="2025-01-29T12:00:14.331333781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jan 29 12:00:14.336452 containerd[1682]: time="2025-01-29T12:00:14.336402628Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:14.341079 containerd[1682]: time="2025-01-29T12:00:14.341020954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:14.341876 containerd[1682]: time="2025-01-29T12:00:14.341732835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 640.246824ms"
Jan 29 12:00:14.341876 containerd[1682]: time="2025-01-29T12:00:14.341769875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 29 12:00:14.364508 containerd[1682]: time="2025-01-29T12:00:14.364273144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 12:00:15.014101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172532652.mount: Deactivated successfully.
Jan 29 12:00:17.402172 containerd[1682]: time="2025-01-29T12:00:17.402102949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:17.404884 containerd[1682]: time="2025-01-29T12:00:17.404833472Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Jan 29 12:00:17.407733 containerd[1682]: time="2025-01-29T12:00:17.407665834Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:17.412964 containerd[1682]: time="2025-01-29T12:00:17.412888759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:17.414971 containerd[1682]: time="2025-01-29T12:00:17.414077720Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.049762176s"
Jan 29 12:00:17.414971 containerd[1682]: time="2025-01-29T12:00:17.414391281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 29 12:00:19.101887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 29 12:00:19.111173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:00:19.211818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:00:19.216424 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:00:19.259628 kubelet[2788]: E0129 12:00:19.257842 2788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:00:19.263293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:00:19.263423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:00:24.031172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:00:24.041965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:00:24.066705 systemd[1]: Reloading requested from client PID 2803 ('systemctl') (unit session-9.scope)...
Jan 29 12:00:24.066863 systemd[1]: Reloading...
Jan 29 12:00:24.164643 zram_generator::config[2839]: No configuration found.
Jan 29 12:00:24.289936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:00:24.367112 systemd[1]: Reloading finished in 299 ms.
Jan 29 12:00:24.410888 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 12:00:24.410966 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 12:00:24.411189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:00:24.418476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:00:24.512276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:00:24.518575 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:00:24.562923 kubelet[2911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:00:24.562923 kubelet[2911]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:00:24.562923 kubelet[2911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
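This attempt finally finds a config file, and the three deprecation warnings all say the same thing: these flags belong in the kubelet's config file. A sketch of the flag-to-field mapping; the field names follow the KubeletConfiguration (kubelet.config.k8s.io/v1beta1) schema as I recall it, so verify them against the kubelet version logged just below (v1.30.1):

```python
# Assumed mapping from the deprecated flags warned about above to
# KubeletConfiguration (kubelet.config.k8s.io/v1beta1) fields. Field names are
# an assumption from the published schema, not taken from this log.
FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    # --pod-infra-container-image has no config-file equivalent; per the
    # warning above, the image garbage collector will take the sandbox image
    # from the CRI runtime's own configuration instead.
    "--pod-infra-container-image": None,
}
```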
Jan 29 12:00:24.564518 kubelet[2911]: I0129 12:00:24.564465 2911 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:00:25.466677 kubelet[2911]: I0129 12:00:25.465887 2911 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 12:00:25.466677 kubelet[2911]: I0129 12:00:25.465921 2911 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:00:25.466677 kubelet[2911]: I0129 12:00:25.466305 2911 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 12:00:25.479206 kubelet[2911]: E0129 12:00:25.479155 2911 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.479686 kubelet[2911]: I0129 12:00:25.479663 2911 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:00:25.487392 kubelet[2911]: I0129 12:00:25.487363 2911 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:00:25.487594 kubelet[2911]: I0129 12:00:25.487561 2911 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:00:25.487785 kubelet[2911]: I0129 12:00:25.487592 2911 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-ecab7ceadc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 12:00:25.487876 kubelet[2911]: I0129 12:00:25.487791 2911 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:00:25.487876 kubelet[2911]: I0129 12:00:25.487800 2911 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 12:00:25.487942 kubelet[2911]: I0129 12:00:25.487924 2911 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:00:25.488757 kubelet[2911]: I0129 12:00:25.488735 2911 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 12:00:25.488799 kubelet[2911]: I0129 12:00:25.488761 2911 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:00:25.488799 kubelet[2911]: I0129 12:00:25.488789 2911 kubelet.go:312] "Adding apiserver pod source"
Jan 29 12:00:25.488846 kubelet[2911]: I0129 12:00:25.488803 2911 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:00:25.491260 kubelet[2911]: W0129 12:00:25.490923 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.491260 kubelet[2911]: E0129 12:00:25.490971 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.491260 kubelet[2911]: W0129 12:00:25.491207 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-ecab7ceadc&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.491260 kubelet[2911]: E0129 12:00:25.491237 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-ecab7ceadc&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.492362 kubelet[2911]: I0129 12:00:25.491766 2911 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 12:00:25.492362 kubelet[2911]: I0129 12:00:25.491936 2911 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:00:25.492362 kubelet[2911]: W0129 12:00:25.491977 2911 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
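The Container Manager NodeConfig entry above carries the kubelet's hard-eviction settings for this node: nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, memory.available < 100Mi. A sketch (not from the kubelet source) that evaluates those thresholds exactly as the dump defines them:

```python
# Hard-eviction thresholds as listed in the NodeConfig entry above: the
# percentage signals are fractions of the corresponding filesystem capacity,
# while memory.available is an absolute quantity.
HARD_EVICTION = {
    "nodefs.available":   ("pct", 0.10),
    "nodefs.inodesFree":  ("pct", 0.05),
    "imagefs.available":  ("pct", 0.15),
    "imagefs.inodesFree": ("pct", 0.05),
    "memory.available":   ("abs", 100 * 1024 * 1024),  # 100Mi
}

def breaches(signal: str, available: float, capacity: float) -> bool:
    """True if the signal is below its hard-eviction threshold."""
    kind, value = HARD_EVICTION[signal]
    limit = capacity * value if kind == "pct" else value
    return available < limit
```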
Jan 29 12:00:25.493310 kubelet[2911]: I0129 12:00:25.493290 2911 server.go:1264] "Started kubelet"
Jan 29 12:00:25.495474 kubelet[2911]: I0129 12:00:25.495433 2911 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:00:25.495592 kubelet[2911]: I0129 12:00:25.495444 2911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:00:25.495933 kubelet[2911]: I0129 12:00:25.495911 2911 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:00:25.496220 kubelet[2911]: E0129 12:00:25.496120 2911 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-ecab7ceadc.181f280edd26e5f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-ecab7ceadc,UID:ci-4081.3.0-a-ecab7ceadc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-ecab7ceadc,},FirstTimestamp:2025-01-29 12:00:25.493267959 +0000 UTC m=+0.971459725,LastTimestamp:2025-01-29 12:00:25.493267959 +0000 UTC m=+0.971459725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-ecab7ceadc,}"
Jan 29 12:00:25.496337 kubelet[2911]: I0129 12:00:25.496299 2911 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 12:00:25.499017 kubelet[2911]: I0129 12:00:25.498980 2911 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:00:25.503168 kubelet[2911]: E0129 12:00:25.501993 2911 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-ecab7ceadc\" not found"
Jan 29 12:00:25.503168 kubelet[2911]: I0129 12:00:25.502031 2911 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 12:00:25.503168 kubelet[2911]: I0129 12:00:25.502128 2911 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 12:00:25.503168 kubelet[2911]: I0129 12:00:25.502177 2911 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:00:25.503168 kubelet[2911]: W0129 12:00:25.502519 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.503168 kubelet[2911]: E0129 12:00:25.502574 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.503168 kubelet[2911]: E0129 12:00:25.502803 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-ecab7ceadc?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms"
Jan 29 12:00:25.503571 kubelet[2911]: E0129 12:00:25.503551 2911 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 12:00:25.503809 kubelet[2911]: I0129 12:00:25.503793 2911 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:00:25.503965 kubelet[2911]: I0129 12:00:25.503947 2911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:00:25.506360 kubelet[2911]: I0129 12:00:25.506333 2911 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:00:25.540303 kubelet[2911]: I0129 12:00:25.540245 2911 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 12:00:25.540303 kubelet[2911]: I0129 12:00:25.540292 2911 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 12:00:25.540453 kubelet[2911]: I0129 12:00:25.540313 2911 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:00:25.545173 kubelet[2911]: I0129 12:00:25.545144 2911 policy_none.go:49] "None policy: Start"
Jan 29 12:00:25.546159 kubelet[2911]: I0129 12:00:25.546136 2911 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 12:00:25.546575 kubelet[2911]: I0129 12:00:25.546257 2911 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:00:25.554240 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 12:00:25.567192 kubelet[2911]: I0129 12:00:25.566859 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:00:25.569458 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 12:00:25.570566 kubelet[2911]: I0129 12:00:25.570535 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:00:25.570648 kubelet[2911]: I0129 12:00:25.570585 2911 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 12:00:25.570648 kubelet[2911]: I0129 12:00:25.570602 2911 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 12:00:25.570707 kubelet[2911]: E0129 12:00:25.570685 2911 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 12:00:25.571799 kubelet[2911]: W0129 12:00:25.571744 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.571799 kubelet[2911]: E0129 12:00:25.571800 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:25.574961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
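Every list/watch, event write, and lease call in this stretch fails the same way: dial tcp 10.200.20.36:6443: connect: connection refused. That is consistent with kubeadm-style control-plane bootstrap, where the kubelet itself is about to start the API server as a static pod from /etc/kubernetes/manifests (the static pod path it added earlier), so nothing is listening on 6443 yet. A throwaway probe for that endpoint, not part of the log:

```python
# Check whether anything is listening on the API server endpoint the kubelet
# keeps dialing above. Returns False while the apiserver static pod is not up.
import socket

def can_connect(host: str = "10.200.20.36", port: int = 6443,
                timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused and timeouts
        return False
```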
Jan 29 12:00:25.580532 kubelet[2911]: I0129 12:00:25.580489 2911 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:00:25.580760 kubelet[2911]: I0129 12:00:25.580718 2911 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:00:25.580851 kubelet[2911]: I0129 12:00:25.580833 2911 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:00:25.583334 kubelet[2911]: E0129 12:00:25.583314 2911 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-ecab7ceadc\" not found"
Jan 29 12:00:25.604197 kubelet[2911]: I0129 12:00:25.604165 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.604526 kubelet[2911]: E0129 12:00:25.604494 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.610066 kubelet[2911]: E0129 12:00:25.609973 2911 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-ecab7ceadc.181f280edd26e5f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-ecab7ceadc,UID:ci-4081.3.0-a-ecab7ceadc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-ecab7ceadc,},FirstTimestamp:2025-01-29 12:00:25.493267959 +0000 UTC m=+0.971459725,LastTimestamp:2025-01-29 12:00:25.493267959 +0000 UTC m=+0.971459725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-ecab7ceadc,}"
Jan 29 12:00:25.671306 kubelet[2911]: I0129 12:00:25.671267 2911 topology_manager.go:215] "Topology Admit Handler" podUID="adb5c855e6c8e1e94c32293cdd6cb9fc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.673129 kubelet[2911]: I0129 12:00:25.673102 2911 topology_manager.go:215] "Topology Admit Handler" podUID="edf1a3c4e71770200adc5e57837d6b12" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.674434 kubelet[2911]: I0129 12:00:25.674394 2911 topology_manager.go:215] "Topology Admit Handler" podUID="822dc1e91c16ac007cc1d449b423271b" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.681270 systemd[1]: Created slice kubepods-burstable-podadb5c855e6c8e1e94c32293cdd6cb9fc.slice - libcontainer container kubepods-burstable-podadb5c855e6c8e1e94c32293cdd6cb9fc.slice.
Jan 29 12:00:25.703098 kubelet[2911]: I0129 12:00:25.702658 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.703098 kubelet[2911]: I0129 12:00:25.702707 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.703098 kubelet[2911]: I0129 12:00:25.702728 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.703098 kubelet[2911]: I0129 12:00:25.702746 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.703098 kubelet[2911]: I0129 12:00:25.702762 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.703575 kubelet[2911]: E0129 12:00:25.703396 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-ecab7ceadc?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms"
Jan 29 12:00:25.706485 systemd[1]: Created slice kubepods-burstable-podedf1a3c4e71770200adc5e57837d6b12.slice - libcontainer container kubepods-burstable-podedf1a3c4e71770200adc5e57837d6b12.slice.
Jan 29 12:00:25.714535 systemd[1]: Created slice kubepods-burstable-pod822dc1e91c16ac007cc1d449b423271b.slice - libcontainer container kubepods-burstable-pod822dc1e91c16ac007cc1d449b423271b.slice.
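The "Failed to ensure lease exists, will retry" interval doubles on each failure: 200ms earlier, 400ms here, then 800ms, 1.6s, and 3.2s in the entries that follow. A sketch of that schedule; the doubling is read directly off the log, while the cap is an assumption about the lease controller rather than something this log shows:

```python
# Doubling retry schedule matching the logged lease intervals:
# 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
def lease_retry_intervals(base: float = 0.2, cap: float = 7.0):
    interval = base
    while True:
        yield interval
        interval = min(interval * 2, cap)  # the 7.0s cap is an assumption
```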
Jan 29 12:00:25.804039 kubelet[2911]: I0129 12:00:25.803774 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adb5c855e6c8e1e94c32293cdd6cb9fc-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" (UID: \"adb5c855e6c8e1e94c32293cdd6cb9fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.804039 kubelet[2911]: I0129 12:00:25.803815 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adb5c855e6c8e1e94c32293cdd6cb9fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" (UID: \"adb5c855e6c8e1e94c32293cdd6cb9fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.804039 kubelet[2911]: I0129 12:00:25.803834 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adb5c855e6c8e1e94c32293cdd6cb9fc-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" (UID: \"adb5c855e6c8e1e94c32293cdd6cb9fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.804039 kubelet[2911]: I0129 12:00:25.803875 2911 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/822dc1e91c16ac007cc1d449b423271b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-ecab7ceadc\" (UID: \"822dc1e91c16ac007cc1d449b423271b\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.807214 kubelet[2911]: I0129 12:00:25.806923 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:25.807345 kubelet[2911]: E0129 12:00:25.807252 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:26.004912 containerd[1682]: time="2025-01-29T12:00:26.004837228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-ecab7ceadc,Uid:adb5c855e6c8e1e94c32293cdd6cb9fc,Namespace:kube-system,Attempt:0,}"
Jan 29 12:00:26.013363 containerd[1682]: time="2025-01-29T12:00:26.013083757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-ecab7ceadc,Uid:edf1a3c4e71770200adc5e57837d6b12,Namespace:kube-system,Attempt:0,}"
Jan 29 12:00:26.018545 containerd[1682]: time="2025-01-29T12:00:26.018276283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-ecab7ceadc,Uid:822dc1e91c16ac007cc1d449b423271b,Namespace:kube-system,Attempt:0,}"
Jan 29 12:00:26.104420 kubelet[2911]: E0129 12:00:26.104376 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-ecab7ceadc?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms"
Jan 29 12:00:26.212074 kubelet[2911]: I0129 12:00:26.212042 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:26.212432 kubelet[2911]: E0129 12:00:26.212404 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:26.441937 kubelet[2911]: W0129 12:00:26.441807 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-ecab7ceadc&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:26.441937 kubelet[2911]: E0129 12:00:26.441868 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-ecab7ceadc&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:26.599641 kubelet[2911]: W0129 12:00:26.599560 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:26.599641 kubelet[2911]: E0129 12:00:26.599635 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:26.732976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924519582.mount: Deactivated successfully.
Jan 29 12:00:26.759401 containerd[1682]: time="2025-01-29T12:00:26.758576713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:00:26.761057 kubelet[2911]: W0129 12:00:26.761005 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:26.761205 kubelet[2911]: E0129 12:00:26.761193 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:26.771457 containerd[1682]: time="2025-01-29T12:00:26.771418926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 29 12:00:26.776583 containerd[1682]: time="2025-01-29T12:00:26.776546731Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:00:26.780046 containerd[1682]: time="2025-01-29T12:00:26.780011055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:00:26.784635 containerd[1682]: time="2025-01-29T12:00:26.783737859Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:00:26.788838 containerd[1682]: time="2025-01-29T12:00:26.788801744Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:00:26.791109 containerd[1682]: time="2025-01-29T12:00:26.791077906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:00:26.797442 containerd[1682]: time="2025-01-29T12:00:26.797402673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 779.04847ms"
Jan 29 12:00:26.799129 containerd[1682]: time="2025-01-29T12:00:26.797664953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:00:26.799636 containerd[1682]: time="2025-01-29T12:00:26.798440514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 793.526845ms"
Jan 29 12:00:26.799760 containerd[1682]: time="2025-01-29T12:00:26.798900035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 785.737878ms"
Jan 29 12:00:26.904988 kubelet[2911]: E0129 12:00:26.904944 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-ecab7ceadc?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s"
Jan 29 12:00:27.014476 kubelet[2911]: I0129 12:00:27.014370 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:27.014764 kubelet[2911]: E0129 12:00:27.014715 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4081.3.0-a-ecab7ceadc"
Jan 29 12:00:27.033415 kubelet[2911]: W0129 12:00:27.033375 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:27.033415 kubelet[2911]: E0129 12:00:27.033418 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:27.557336 kubelet[2911]: E0129 12:00:27.557295 2911 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused
Jan 29 12:00:28.081202 containerd[1682]: time="2025-01-29T12:00:28.081088655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:00:28.081864 containerd[1682]: time="2025-01-29T12:00:28.081734855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:00:28.081864 containerd[1682]: time="2025-01-29T12:00:28.081814855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:28.082878 containerd[1682]: time="2025-01-29T12:00:28.082823456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:28.090777 containerd[1682]: time="2025-01-29T12:00:28.090301064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:00:28.090777 containerd[1682]: time="2025-01-29T12:00:28.090357104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:00:28.090777 containerd[1682]: time="2025-01-29T12:00:28.090372784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:28.090777 containerd[1682]: time="2025-01-29T12:00:28.090449424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:28.092453 containerd[1682]: time="2025-01-29T12:00:28.092358186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:00:28.092668 containerd[1682]: time="2025-01-29T12:00:28.092599027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:00:28.092858 containerd[1682]: time="2025-01-29T12:00:28.092818587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:28.093320 containerd[1682]: time="2025-01-29T12:00:28.093140307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:28.131846 systemd[1]: Started cri-containerd-18c9252d4b9240ffd419a3726775adc5d9af17ad48265558eaa842fdc84c10a4.scope - libcontainer container 18c9252d4b9240ffd419a3726775adc5d9af17ad48265558eaa842fdc84c10a4.
Jan 29 12:00:28.143846 systemd[1]: Started cri-containerd-4c816c8575ca97271e77de24a474e1f52e7fac5314fa48fa8652408a99354098.scope - libcontainer container 4c816c8575ca97271e77de24a474e1f52e7fac5314fa48fa8652408a99354098.
Jan 29 12:00:28.146875 systemd[1]: Started cri-containerd-50c01392e64077014db7b907527f698944ce059afc5cf7d49883d966fa1803e4.scope - libcontainer container 50c01392e64077014db7b907527f698944ce059afc5cf7d49883d966fa1803e4.
Jan 29 12:00:28.189240 containerd[1682]: time="2025-01-29T12:00:28.189066927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-ecab7ceadc,Uid:adb5c855e6c8e1e94c32293cdd6cb9fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c9252d4b9240ffd419a3726775adc5d9af17ad48265558eaa842fdc84c10a4\"" Jan 29 12:00:28.195456 containerd[1682]: time="2025-01-29T12:00:28.195239334Z" level=info msg="CreateContainer within sandbox \"18c9252d4b9240ffd419a3726775adc5d9af17ad48265558eaa842fdc84c10a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:00:28.205128 containerd[1682]: time="2025-01-29T12:00:28.204953024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-ecab7ceadc,Uid:edf1a3c4e71770200adc5e57837d6b12,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c816c8575ca97271e77de24a474e1f52e7fac5314fa48fa8652408a99354098\"" Jan 29 12:00:28.209190 containerd[1682]: time="2025-01-29T12:00:28.209078988Z" level=info msg="CreateContainer within sandbox \"4c816c8575ca97271e77de24a474e1f52e7fac5314fa48fa8652408a99354098\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:00:28.213435 containerd[1682]: time="2025-01-29T12:00:28.213326153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-ecab7ceadc,Uid:822dc1e91c16ac007cc1d449b423271b,Namespace:kube-system,Attempt:0,} returns sandbox id \"50c01392e64077014db7b907527f698944ce059afc5cf7d49883d966fa1803e4\"" Jan 29 12:00:28.216642 containerd[1682]: time="2025-01-29T12:00:28.216515916Z" level=info msg="CreateContainer within sandbox \"50c01392e64077014db7b907527f698944ce059afc5cf7d49883d966fa1803e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:00:28.505719 kubelet[2911]: E0129 12:00:28.505675 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-ecab7ceadc?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="3.2s" Jan 29 12:00:28.521019 kubelet[2911]: W0129 12:00:28.520987 2911 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jan 29 12:00:28.530820 kubelet[2911]: E0129 12:00:28.521027 2911 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jan 29 12:00:28.580453 containerd[1682]: time="2025-01-29T12:00:28.580406656Z" level=info msg="CreateContainer within sandbox \"18c9252d4b9240ffd419a3726775adc5d9af17ad48265558eaa842fdc84c10a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b86b8ec774eee1edab7caeac11dda4d78346ec7ba06f64ffb56814c93cad8e16\"" Jan 29 12:00:28.581320 containerd[1682]: time="2025-01-29T12:00:28.581273657Z" level=info msg="StartContainer for \"b86b8ec774eee1edab7caeac11dda4d78346ec7ba06f64ffb56814c93cad8e16\"" Jan 29 12:00:28.593785 containerd[1682]: time="2025-01-29T12:00:28.593356470Z" level=info msg="CreateContainer within sandbox \"50c01392e64077014db7b907527f698944ce059afc5cf7d49883d966fa1803e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"f87cb83f501e554cecd5b6999c3df6254c01efb627282090d6060a69c184ec00\"" Jan 29 12:00:28.594039 containerd[1682]: time="2025-01-29T12:00:28.593995911Z" level=info msg="StartContainer for \"f87cb83f501e554cecd5b6999c3df6254c01efb627282090d6060a69c184ec00\"" Jan 29 12:00:28.599037 containerd[1682]: time="2025-01-29T12:00:28.598814436Z" level=info msg="CreateContainer within sandbox \"4c816c8575ca97271e77de24a474e1f52e7fac5314fa48fa8652408a99354098\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfeaa475b1ba04dba8736be31fd5b644841133e2b387c9e7adaf44f5ef82de57\"" Jan 29 12:00:28.601092 containerd[1682]: time="2025-01-29T12:00:28.600001117Z" level=info msg="StartContainer for \"dfeaa475b1ba04dba8736be31fd5b644841133e2b387c9e7adaf44f5ef82de57\"" Jan 29 12:00:28.612171 systemd[1]: Started cri-containerd-b86b8ec774eee1edab7caeac11dda4d78346ec7ba06f64ffb56814c93cad8e16.scope - libcontainer container b86b8ec774eee1edab7caeac11dda4d78346ec7ba06f64ffb56814c93cad8e16. Jan 29 12:00:28.621944 kubelet[2911]: I0129 12:00:28.621565 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:28.623843 kubelet[2911]: E0129 12:00:28.623796 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:28.630927 systemd[1]: Started cri-containerd-f87cb83f501e554cecd5b6999c3df6254c01efb627282090d6060a69c184ec00.scope - libcontainer container f87cb83f501e554cecd5b6999c3df6254c01efb627282090d6060a69c184ec00. Jan 29 12:00:28.639843 systemd[1]: Started cri-containerd-dfeaa475b1ba04dba8736be31fd5b644841133e2b387c9e7adaf44f5ef82de57.scope - libcontainer container dfeaa475b1ba04dba8736be31fd5b644841133e2b387c9e7adaf44f5ef82de57. 
Jan 29 12:00:28.680269 containerd[1682]: time="2025-01-29T12:00:28.679971800Z" level=info msg="StartContainer for \"b86b8ec774eee1edab7caeac11dda4d78346ec7ba06f64ffb56814c93cad8e16\" returns successfully" Jan 29 12:00:28.689369 containerd[1682]: time="2025-01-29T12:00:28.689327010Z" level=info msg="StartContainer for \"f87cb83f501e554cecd5b6999c3df6254c01efb627282090d6060a69c184ec00\" returns successfully" Jan 29 12:00:28.716774 containerd[1682]: time="2025-01-29T12:00:28.716727079Z" level=info msg="StartContainer for \"dfeaa475b1ba04dba8736be31fd5b644841133e2b387c9e7adaf44f5ef82de57\" returns successfully" Jan 29 12:00:31.493895 kubelet[2911]: I0129 12:00:31.493641 2911 apiserver.go:52] "Watching apiserver" Jan 29 12:00:31.501987 kubelet[2911]: E0129 12:00:31.501948 2911 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.0-a-ecab7ceadc" not found Jan 29 12:00:31.503032 kubelet[2911]: I0129 12:00:31.503015 2911 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:00:31.709399 kubelet[2911]: E0129 12:00:31.709357 2911 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-ecab7ceadc\" not found" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:31.825836 kubelet[2911]: I0129 12:00:31.825729 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:31.835415 kubelet[2911]: I0129 12:00:31.835349 2911 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:32.331375 kubelet[2911]: W0129 12:00:32.331336 2911 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:00:33.453867 systemd[1]: Reloading requested from client PID 3186 ('systemctl') (unit session-9.scope)... Jan 29 12:00:33.453883 systemd[1]: Reloading... Jan 29 12:00:33.548732 zram_generator::config[3229]: No configuration found. Jan 29 12:00:33.671264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:00:33.763481 systemd[1]: Reloading finished in 309 ms. Jan 29 12:00:33.798763 kubelet[2911]: I0129 12:00:33.798683 2911 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:00:33.799142 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:33.810907 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:00:33.811174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:33.811239 systemd[1]: kubelet.service: Consumed 1.313s CPU time, 113.3M memory peak, 0B memory swap peak. Jan 29 12:00:33.815017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:33.980089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:33.991431 (kubelet)[3290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:00:34.052518 kubelet[3290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:00:34.052518 kubelet[3290]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:00:34.052518 kubelet[3290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:00:34.052518 kubelet[3290]: I0129 12:00:34.052090 3290 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:00:34.060694 kubelet[3290]: I0129 12:00:34.060002 3290 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:00:34.060694 kubelet[3290]: I0129 12:00:34.060030 3290 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:00:34.060694 kubelet[3290]: I0129 12:00:34.060234 3290 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:00:34.063101 kubelet[3290]: I0129 12:00:34.063063 3290 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:00:34.067857 kubelet[3290]: I0129 12:00:34.067824 3290 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:00:34.078516 kubelet[3290]: I0129 12:00:34.077606 3290 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:00:34.078516 kubelet[3290]: I0129 12:00:34.077883 3290 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:00:34.078516 kubelet[3290]: I0129 12:00:34.077912 3290 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-ecab7ceadc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:00:34.078516 
kubelet[3290]: I0129 12:00:34.078096 3290 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:00:34.078806 kubelet[3290]: I0129 12:00:34.078105 3290 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:00:34.078806 kubelet[3290]: I0129 12:00:34.078146 3290 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:00:34.078806 kubelet[3290]: I0129 12:00:34.078281 3290 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:00:34.078806 kubelet[3290]: I0129 12:00:34.078302 3290 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:00:34.078806 kubelet[3290]: I0129 12:00:34.078329 3290 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:00:34.078806 kubelet[3290]: I0129 12:00:34.078346 3290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:00:34.080574 kubelet[3290]: I0129 12:00:34.080245 3290 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:00:34.080574 kubelet[3290]: I0129 12:00:34.080412 3290 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:00:34.081037 kubelet[3290]: I0129 12:00:34.080822 3290 server.go:1264] "Started kubelet" Jan 29 12:00:34.084774 kubelet[3290]: I0129 12:00:34.084740 3290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:00:34.098637 kubelet[3290]: I0129 12:00:34.094795 3290 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:00:34.098637 kubelet[3290]: I0129 12:00:34.095765 3290 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:00:34.098891 kubelet[3290]: I0129 12:00:34.098827 3290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:00:34.099889 kubelet[3290]: I0129 12:00:34.099872 3290 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:00:34.100033 kubelet[3290]: I0129 12:00:34.100020 3290 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:00:34.101643 kubelet[3290]: I0129 12:00:34.100583 3290 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:00:34.102410 kubelet[3290]: I0129 12:00:34.100827 3290 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:00:34.105629 kubelet[3290]: I0129 12:00:34.104544 3290 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:00:34.105629 kubelet[3290]: I0129 12:00:34.104802 3290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:00:34.107245 kubelet[3290]: E0129 12:00:34.107195 3290 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:00:34.108445 kubelet[3290]: I0129 12:00:34.108414 3290 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:00:34.117703 kubelet[3290]: I0129 12:00:34.116598 3290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:00:34.130549 kubelet[3290]: I0129 12:00:34.130492 3290 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:00:34.130549 kubelet[3290]: I0129 12:00:34.130544 3290 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:00:34.130549 kubelet[3290]: I0129 12:00:34.130561 3290 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:00:34.132235 kubelet[3290]: E0129 12:00:34.130611 3290 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:00:34.172507 kubelet[3290]: I0129 12:00:34.172474 3290 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:00:34.172507 kubelet[3290]: I0129 12:00:34.172493 3290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:00:34.172507 kubelet[3290]: I0129 12:00:34.172513 3290 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:00:34.172720 kubelet[3290]: I0129 12:00:34.172698 3290 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:00:34.172746 kubelet[3290]: I0129 12:00:34.172709 3290 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:00:34.172746 kubelet[3290]: I0129 12:00:34.172728 3290 policy_none.go:49] "None policy: Start" Jan 29 12:00:34.173521 kubelet[3290]: I0129 12:00:34.173474 3290 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:00:34.173521 kubelet[3290]: I0129 12:00:34.173500 3290 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:00:34.173727 kubelet[3290]: I0129 12:00:34.173671 3290 state_mem.go:75] "Updated machine memory state" Jan 29 12:00:34.178077 kubelet[3290]: I0129 12:00:34.178048 3290 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:00:34.178273 kubelet[3290]: I0129 12:00:34.178230 3290 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:00:34.178350 kubelet[3290]: I0129 12:00:34.178335 3290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:00:34.203713 kubelet[3290]: I0129 12:00:34.203667 3290 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.215434 kubelet[3290]: I0129 12:00:34.215393 3290 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.215575 kubelet[3290]: I0129 12:00:34.215481 3290 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.232535 kubelet[3290]: I0129 12:00:34.232497 3290 topology_manager.go:215] "Topology Admit Handler" podUID="adb5c855e6c8e1e94c32293cdd6cb9fc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.232695 kubelet[3290]: I0129 12:00:34.232653 3290 topology_manager.go:215] "Topology Admit Handler" podUID="edf1a3c4e71770200adc5e57837d6b12" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.232739 kubelet[3290]: I0129 12:00:34.232696 3290 topology_manager.go:215] "Topology Admit Handler" podUID="822dc1e91c16ac007cc1d449b423271b" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.240336 kubelet[3290]: W0129 12:00:34.240161 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:00:34.244329 kubelet[3290]: W0129 12:00:34.244170 3290 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:00:34.245142 kubelet[3290]: W0129 12:00:34.245116 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:00:34.245242 kubelet[3290]: E0129 12:00:34.245174 3290 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304691 kubelet[3290]: I0129 12:00:34.304483 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/822dc1e91c16ac007cc1d449b423271b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-ecab7ceadc\" (UID: \"822dc1e91c16ac007cc1d449b423271b\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304691 kubelet[3290]: I0129 12:00:34.304522 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304691 kubelet[3290]: I0129 12:00:34.304548 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304691 kubelet[3290]: I0129 12:00:34.304573 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304691 kubelet[3290]: I0129 12:00:34.304592 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304944 kubelet[3290]: I0129 12:00:34.304607 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/adb5c855e6c8e1e94c32293cdd6cb9fc-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" (UID: \"adb5c855e6c8e1e94c32293cdd6cb9fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.304944 kubelet[3290]: I0129 12:00:34.304647 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/adb5c855e6c8e1e94c32293cdd6cb9fc-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" (UID: \"adb5c855e6c8e1e94c32293cdd6cb9fc\") " 
pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.305454 kubelet[3290]: I0129 12:00:34.304666 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/adb5c855e6c8e1e94c32293cdd6cb9fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" (UID: \"adb5c855e6c8e1e94c32293cdd6cb9fc\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:34.305454 kubelet[3290]: I0129 12:00:34.305353 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edf1a3c4e71770200adc5e57837d6b12-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" (UID: \"edf1a3c4e71770200adc5e57837d6b12\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:35.080057 kubelet[3290]: I0129 12:00:35.079999 3290 apiserver.go:52] "Watching apiserver" Jan 29 12:00:35.102734 kubelet[3290]: I0129 12:00:35.102691 3290 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:00:35.177841 kubelet[3290]: W0129 12:00:35.176846 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:00:35.177841 kubelet[3290]: E0129 12:00:35.176918 3290 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-ecab7ceadc\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:35.177841 kubelet[3290]: W0129 12:00:35.177396 3290 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:00:35.177841 kubelet[3290]: E0129 12:00:35.177427 3290 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-ecab7ceadc\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc" Jan 29 12:00:35.261162 kubelet[3290]: I0129 12:00:35.261103 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-ecab7ceadc" podStartSLOduration=1.261084439 podStartE2EDuration="1.261084439s" podCreationTimestamp="2025-01-29 12:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:35.230114486 +0000 UTC m=+1.235271813" watchObservedRunningTime="2025-01-29 12:00:35.261084439 +0000 UTC m=+1.266241766" Jan 29 12:00:35.295796 kubelet[3290]: I0129 12:00:35.295722 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-ecab7ceadc" podStartSLOduration=1.295702715 podStartE2EDuration="1.295702715s" podCreationTimestamp="2025-01-29 12:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:35.265096123 +0000 UTC m=+1.270253450" watchObservedRunningTime="2025-01-29 12:00:35.295702715 +0000 UTC m=+1.300860042" Jan 29 12:00:35.331719 kubelet[3290]: I0129 12:00:35.331561 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-ecab7ceadc" podStartSLOduration=3.331541913 
podStartE2EDuration="3.331541913s" podCreationTimestamp="2025-01-29 12:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:35.296023595 +0000 UTC m=+1.301180922" watchObservedRunningTime="2025-01-29 12:00:35.331541913 +0000 UTC m=+1.336699200" Jan 29 12:00:39.246262 sudo[2358]: pam_unix(sudo:session): session closed for user root Jan 29 12:00:39.325890 sshd[2355]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:39.330056 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:46730.service: Deactivated successfully. Jan 29 12:00:39.332169 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:00:39.332379 systemd[1]: session-9.scope: Consumed 7.558s CPU time, 188.2M memory peak, 0B memory swap peak. Jan 29 12:00:39.332973 systemd-logind[1660]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:00:39.333923 systemd-logind[1660]: Removed session 9. Jan 29 12:00:48.340369 kubelet[3290]: I0129 12:00:48.340335 3290 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:00:48.341777 containerd[1682]: time="2025-01-29T12:00:48.341153789Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:00:48.343016 kubelet[3290]: I0129 12:00:48.341404 3290 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:00:48.632595 kubelet[3290]: I0129 12:00:48.631570 3290 topology_manager.go:215] "Topology Admit Handler" podUID="8defd5f9-41d5-4966-b55b-277024de19e4" podNamespace="kube-system" podName="kube-proxy-dzrjc" Jan 29 12:00:48.642188 systemd[1]: Created slice kubepods-besteffort-pod8defd5f9_41d5_4966_b55b_277024de19e4.slice - libcontainer container kubepods-besteffort-pod8defd5f9_41d5_4966_b55b_277024de19e4.slice. 
Jan 29 12:00:48.699009 kubelet[3290]: I0129 12:00:48.698847 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8defd5f9-41d5-4966-b55b-277024de19e4-kube-proxy\") pod \"kube-proxy-dzrjc\" (UID: \"8defd5f9-41d5-4966-b55b-277024de19e4\") " pod="kube-system/kube-proxy-dzrjc" Jan 29 12:00:48.699009 kubelet[3290]: I0129 12:00:48.698895 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8defd5f9-41d5-4966-b55b-277024de19e4-xtables-lock\") pod \"kube-proxy-dzrjc\" (UID: \"8defd5f9-41d5-4966-b55b-277024de19e4\") " pod="kube-system/kube-proxy-dzrjc" Jan 29 12:00:48.699009 kubelet[3290]: I0129 12:00:48.698916 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8defd5f9-41d5-4966-b55b-277024de19e4-lib-modules\") pod \"kube-proxy-dzrjc\" (UID: \"8defd5f9-41d5-4966-b55b-277024de19e4\") " pod="kube-system/kube-proxy-dzrjc" Jan 29 12:00:48.699009 kubelet[3290]: I0129 12:00:48.698934 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9w5\" (UniqueName: \"kubernetes.io/projected/8defd5f9-41d5-4966-b55b-277024de19e4-kube-api-access-4q9w5\") pod \"kube-proxy-dzrjc\" (UID: \"8defd5f9-41d5-4966-b55b-277024de19e4\") " pod="kube-system/kube-proxy-dzrjc" Jan 29 12:00:48.807317 kubelet[3290]: E0129 12:00:48.807032 3290 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 12:00:48.807317 kubelet[3290]: E0129 12:00:48.807072 3290 projected.go:200] Error preparing data for projected volume kube-api-access-4q9w5 for pod kube-system/kube-proxy-dzrjc: configmap "kube-root-ca.crt" not found Jan 29 12:00:48.807317 kubelet[3290]: E0129 12:00:48.807130 3290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8defd5f9-41d5-4966-b55b-277024de19e4-kube-api-access-4q9w5 podName:8defd5f9-41d5-4966-b55b-277024de19e4 nodeName:}" failed. No retries permitted until 2025-01-29 12:00:49.307110143 +0000 UTC m=+15.312267470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4q9w5" (UniqueName: "kubernetes.io/projected/8defd5f9-41d5-4966-b55b-277024de19e4-kube-api-access-4q9w5") pod "kube-proxy-dzrjc" (UID: "8defd5f9-41d5-4966-b55b-277024de19e4") : configmap "kube-root-ca.crt" not found Jan 29 12:00:49.412755 kubelet[3290]: I0129 12:00:49.411047 3290 topology_manager.go:215] "Topology Admit Handler" podUID="0c0df586-b320-46f7-a19b-6aa8c7b8c6c4" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-7cd6z" Jan 29 12:00:49.420047 systemd[1]: Created slice kubepods-besteffort-pod0c0df586_b320_46f7_a19b_6aa8c7b8c6c4.slice - libcontainer container kubepods-besteffort-pod0c0df586_b320_46f7_a19b_6aa8c7b8c6c4.slice. 
Jan 29 12:00:49.504416 kubelet[3290]: I0129 12:00:49.504370 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zq9v\" (UniqueName: \"kubernetes.io/projected/0c0df586-b320-46f7-a19b-6aa8c7b8c6c4-kube-api-access-9zq9v\") pod \"tigera-operator-7bc55997bb-7cd6z\" (UID: \"0c0df586-b320-46f7-a19b-6aa8c7b8c6c4\") " pod="tigera-operator/tigera-operator-7bc55997bb-7cd6z" Jan 29 12:00:49.504416 kubelet[3290]: I0129 12:00:49.504424 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c0df586-b320-46f7-a19b-6aa8c7b8c6c4-var-lib-calico\") pod \"tigera-operator-7bc55997bb-7cd6z\" (UID: \"0c0df586-b320-46f7-a19b-6aa8c7b8c6c4\") " pod="tigera-operator/tigera-operator-7bc55997bb-7cd6z" Jan 29 12:00:49.550164 containerd[1682]: time="2025-01-29T12:00:49.550119657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzrjc,Uid:8defd5f9-41d5-4966-b55b-277024de19e4,Namespace:kube-system,Attempt:0,}" Jan 29 12:00:49.594309 containerd[1682]: time="2025-01-29T12:00:49.594159862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:49.594309 containerd[1682]: time="2025-01-29T12:00:49.594227142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:49.594309 containerd[1682]: time="2025-01-29T12:00:49.594246102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:49.594566 containerd[1682]: time="2025-01-29T12:00:49.594348782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:49.619849 systemd[1]: Started cri-containerd-bd39e9f8fb516a20b4c7f71146d6f4314ac3437577afb7ccd80ccbd1930248c6.scope - libcontainer container bd39e9f8fb516a20b4c7f71146d6f4314ac3437577afb7ccd80ccbd1930248c6. Jan 29 12:00:49.641880 containerd[1682]: time="2025-01-29T12:00:49.641707590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzrjc,Uid:8defd5f9-41d5-4966-b55b-277024de19e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd39e9f8fb516a20b4c7f71146d6f4314ac3437577afb7ccd80ccbd1930248c6\"" Jan 29 12:00:49.645285 containerd[1682]: time="2025-01-29T12:00:49.645145833Z" level=info msg="CreateContainer within sandbox \"bd39e9f8fb516a20b4c7f71146d6f4314ac3437577afb7ccd80ccbd1930248c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:00:49.686463 containerd[1682]: time="2025-01-29T12:00:49.686259195Z" level=info msg="CreateContainer within sandbox \"bd39e9f8fb516a20b4c7f71146d6f4314ac3437577afb7ccd80ccbd1930248c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc64acf4b741a4c7992f187ae5536a3caedece8cab61fcbc20b49eff7dc3f6d9\"" Jan 29 12:00:49.687606 containerd[1682]: time="2025-01-29T12:00:49.687548156Z" level=info msg="StartContainer for \"cc64acf4b741a4c7992f187ae5536a3caedece8cab61fcbc20b49eff7dc3f6d9\"" Jan 29 12:00:49.712809 systemd[1]: Started cri-containerd-cc64acf4b741a4c7992f187ae5536a3caedece8cab61fcbc20b49eff7dc3f6d9.scope - libcontainer container cc64acf4b741a4c7992f187ae5536a3caedece8cab61fcbc20b49eff7dc3f6d9. 
Jan 29 12:00:49.723759 containerd[1682]: time="2025-01-29T12:00:49.723712233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7cd6z,Uid:0c0df586-b320-46f7-a19b-6aa8c7b8c6c4,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:00:49.748641 containerd[1682]: time="2025-01-29T12:00:49.748430978Z" level=info msg="StartContainer for \"cc64acf4b741a4c7992f187ae5536a3caedece8cab61fcbc20b49eff7dc3f6d9\" returns successfully" Jan 29 12:00:49.772925 containerd[1682]: time="2025-01-29T12:00:49.772797043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:00:49.772925 containerd[1682]: time="2025-01-29T12:00:49.772871003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:00:49.772925 containerd[1682]: time="2025-01-29T12:00:49.772882563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:49.773172 containerd[1682]: time="2025-01-29T12:00:49.773024243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:00:49.793865 systemd[1]: Started cri-containerd-c82bbe4e703636b433e6446f73621e76237a79dba6ad94ef626bdb88b4379165.scope - libcontainer container c82bbe4e703636b433e6446f73621e76237a79dba6ad94ef626bdb88b4379165. Jan 29 12:00:49.830474 containerd[1682]: time="2025-01-29T12:00:49.829956301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7cd6z,Uid:0c0df586-b320-46f7-a19b-6aa8c7b8c6c4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c82bbe4e703636b433e6446f73621e76237a79dba6ad94ef626bdb88b4379165\"" Jan 29 12:00:49.834466 containerd[1682]: time="2025-01-29T12:00:49.834218505Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:00:50.205824 kubelet[3290]: I0129 12:00:50.205754 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dzrjc" podStartSLOduration=2.205736403 podStartE2EDuration="2.205736403s" podCreationTimestamp="2025-01-29 12:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:50.205559242 +0000 UTC m=+16.210716569" watchObservedRunningTime="2025-01-29 12:00:50.205736403 +0000 UTC m=+16.210893730" Jan 29 12:00:51.326611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410632654.mount: Deactivated successfully. 
Jan 29 12:00:51.676834 containerd[1682]: time="2025-01-29T12:00:51.676720473Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:51.679834 containerd[1682]: time="2025-01-29T12:00:51.679778918Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 12:00:51.682497 containerd[1682]: time="2025-01-29T12:00:51.682447081Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:51.687121 containerd[1682]: time="2025-01-29T12:00:51.687043608Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:51.688006 containerd[1682]: time="2025-01-29T12:00:51.687863929Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.853604064s" Jan 29 12:00:51.688006 containerd[1682]: time="2025-01-29T12:00:51.687901529Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 12:00:51.690979 containerd[1682]: time="2025-01-29T12:00:51.690921614Z" level=info msg="CreateContainer within sandbox \"c82bbe4e703636b433e6446f73621e76237a79dba6ad94ef626bdb88b4379165\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:00:51.727664 containerd[1682]: time="2025-01-29T12:00:51.727594466Z" level=info msg="CreateContainer within sandbox \"c82bbe4e703636b433e6446f73621e76237a79dba6ad94ef626bdb88b4379165\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ca0e7e8a4c3f2b650b50984f3e4b3252a5d4bab4d030b1835fd4072c722a6794\"" Jan 29 12:00:51.729506 containerd[1682]: time="2025-01-29T12:00:51.728314747Z" level=info msg="StartContainer for \"ca0e7e8a4c3f2b650b50984f3e4b3252a5d4bab4d030b1835fd4072c722a6794\"" Jan 29 12:00:51.755814 systemd[1]: Started cri-containerd-ca0e7e8a4c3f2b650b50984f3e4b3252a5d4bab4d030b1835fd4072c722a6794.scope - libcontainer container ca0e7e8a4c3f2b650b50984f3e4b3252a5d4bab4d030b1835fd4072c722a6794. 
Jan 29 12:00:51.794128 containerd[1682]: time="2025-01-29T12:00:51.794071402Z" level=info msg="StartContainer for \"ca0e7e8a4c3f2b650b50984f3e4b3252a5d4bab4d030b1835fd4072c722a6794\" returns successfully" Jan 29 12:00:55.602667 kubelet[3290]: I0129 12:00:55.601899 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-7cd6z" podStartSLOduration=4.745344369 podStartE2EDuration="6.601876516s" podCreationTimestamp="2025-01-29 12:00:49 +0000 UTC" firstStartedPulling="2025-01-29 12:00:49.832440184 +0000 UTC m=+15.837597511" lastFinishedPulling="2025-01-29 12:00:51.688972371 +0000 UTC m=+17.694129658" observedRunningTime="2025-01-29 12:00:52.208382598 +0000 UTC m=+18.213539925" watchObservedRunningTime="2025-01-29 12:00:55.601876516 +0000 UTC m=+21.607033843" Jan 29 12:00:55.602667 kubelet[3290]: I0129 12:00:55.602102 3290 topology_manager.go:215] "Topology Admit Handler" podUID="a4990856-fda6-464c-b9c9-3a624bb6b4bd" podNamespace="calico-system" podName="calico-typha-7f7c6f9846-ggnwh" Jan 29 12:00:55.614695 systemd[1]: Created slice kubepods-besteffort-poda4990856_fda6_464c_b9c9_3a624bb6b4bd.slice - libcontainer container kubepods-besteffort-poda4990856_fda6_464c_b9c9_3a624bb6b4bd.slice. Jan 29 12:00:55.640358 kubelet[3290]: I0129 12:00:55.640238 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4990856-fda6-464c-b9c9-3a624bb6b4bd-typha-certs\") pod \"calico-typha-7f7c6f9846-ggnwh\" (UID: \"a4990856-fda6-464c-b9c9-3a624bb6b4bd\") " pod="calico-system/calico-typha-7f7c6f9846-ggnwh" Jan 29 12:00:55.640358 kubelet[3290]: I0129 12:00:55.640275 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4990856-fda6-464c-b9c9-3a624bb6b4bd-tigera-ca-bundle\") pod \"calico-typha-7f7c6f9846-ggnwh\" (UID: \"a4990856-fda6-464c-b9c9-3a624bb6b4bd\") " pod="calico-system/calico-typha-7f7c6f9846-ggnwh" Jan 29 12:00:55.640358 kubelet[3290]: I0129 12:00:55.640295 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmpns\" (UniqueName: \"kubernetes.io/projected/a4990856-fda6-464c-b9c9-3a624bb6b4bd-kube-api-access-jmpns\") pod \"calico-typha-7f7c6f9846-ggnwh\" (UID: \"a4990856-fda6-464c-b9c9-3a624bb6b4bd\") " pod="calico-system/calico-typha-7f7c6f9846-ggnwh" Jan 29 12:00:55.699698 kubelet[3290]: I0129 12:00:55.699643 3290 topology_manager.go:215] "Topology Admit Handler" podUID="342cbe97-f22f-4415-ba7b-58f5a684e722" podNamespace="calico-system" podName="calico-node-srwq9" Jan 29 12:00:55.709657 systemd[1]: Created slice kubepods-besteffort-pod342cbe97_f22f_4415_ba7b_58f5a684e722.slice - libcontainer container kubepods-besteffort-pod342cbe97_f22f_4415_ba7b_58f5a684e722.slice. 
Jan 29 12:00:55.742655 kubelet[3290]: I0129 12:00:55.741389 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-cni-log-dir\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742655 kubelet[3290]: I0129 12:00:55.741430 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-lib-modules\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742655 kubelet[3290]: I0129 12:00:55.741453 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkzv\" (UniqueName: \"kubernetes.io/projected/342cbe97-f22f-4415-ba7b-58f5a684e722-kube-api-access-qbkzv\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742655 kubelet[3290]: I0129 12:00:55.741470 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/342cbe97-f22f-4415-ba7b-58f5a684e722-node-certs\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742655 kubelet[3290]: I0129 12:00:55.741487 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-var-run-calico\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742880 kubelet[3290]: I0129 12:00:55.741503 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-cni-bin-dir\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742880 kubelet[3290]: I0129 12:00:55.741530 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-xtables-lock\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742880 kubelet[3290]: I0129 12:00:55.741544 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-policysync\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742880 kubelet[3290]: I0129 12:00:55.741561 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/342cbe97-f22f-4415-ba7b-58f5a684e722-tigera-ca-bundle\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.742880 kubelet[3290]: I0129 12:00:55.741587 3290 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-var-lib-calico\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.743001 kubelet[3290]: I0129 12:00:55.741604 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-cni-net-dir\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.743001 kubelet[3290]: I0129 12:00:55.741641 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/342cbe97-f22f-4415-ba7b-58f5a684e722-flexvol-driver-host\") pod \"calico-node-srwq9\" (UID: \"342cbe97-f22f-4415-ba7b-58f5a684e722\") " pod="calico-system/calico-node-srwq9" Jan 29 12:00:55.827844 kubelet[3290]: I0129 12:00:55.827529 3290 topology_manager.go:215] "Topology Admit Handler" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e" podNamespace="calico-system" podName="csi-node-driver-6b7ld" Jan 29 12:00:55.829190 kubelet[3290]: E0129 12:00:55.828574 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e" Jan 29 12:00:55.842649 kubelet[3290]: I0129 12:00:55.842011 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4021a8b2-0983-46eb-bc49-b0d6f7f6429e-varrun\") pod \"csi-node-driver-6b7ld\" (UID: \"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\") " pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:00:55.842649 kubelet[3290]: I0129 12:00:55.842063 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4021a8b2-0983-46eb-bc49-b0d6f7f6429e-registration-dir\") pod \"csi-node-driver-6b7ld\" (UID: \"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\") " pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:00:55.842649 kubelet[3290]: I0129 12:00:55.842093 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4021a8b2-0983-46eb-bc49-b0d6f7f6429e-socket-dir\") pod \"csi-node-driver-6b7ld\" (UID: \"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\") " pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:00:55.842649 kubelet[3290]: I0129 12:00:55.842113 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4021a8b2-0983-46eb-bc49-b0d6f7f6429e-kubelet-dir\") pod \"csi-node-driver-6b7ld\" (UID: \"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\") " pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:00:55.842649 kubelet[3290]: I0129 12:00:55.842134 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbptj\" (UniqueName: \"kubernetes.io/projected/4021a8b2-0983-46eb-bc49-b0d6f7f6429e-kube-api-access-lbptj\") pod \"csi-node-driver-6b7ld\" (UID: 
\"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\") " pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845122 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:00:55.845825 kubelet[3290]: W0129 12:00:55.845147 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845169 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845315 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:00:55.845825 kubelet[3290]: W0129 12:00:55.845323 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845331 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845447 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:00:55.845825 kubelet[3290]: W0129 12:00:55.845453 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845461 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:00:55.845825 kubelet[3290]: E0129 12:00:55.845667 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:00:55.846155 kubelet[3290]: W0129 12:00:55.845675 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:00:55.846155 kubelet[3290]: E0129 12:00:55.845684 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:00:55.850001 kubelet[3290]: E0129 12:00:55.849247 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:00:55.850001 kubelet[3290]: W0129 12:00:55.849272 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:00:55.850001 kubelet[3290]: E0129 12:00:55.849289 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 12:00:55.872231 kubelet[3290]: E0129 12:00:55.871966 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:00:55.872231 kubelet[3290]: W0129 12:00:55.872000 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:00:55.872231 kubelet[3290]: E0129 12:00:55.872024 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:00:55.919902 containerd[1682]: time="2025-01-29T12:00:55.919837253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7c6f9846-ggnwh,Uid:a4990856-fda6-464c-b9c9-3a624bb6b4bd,Namespace:calico-system,Attempt:0,}"
[... the three kubelet entries above (driver-call.go:262, driver-call.go:149, plugins.go:730) repeat verbatim with timestamps 12:00:55.944676 through 12:00:55.964377, differing only in timestamps and occasional journal interleaving ...]
Jan 29 12:00:55.982365 containerd[1682]: time="2025-01-29T12:00:55.981309701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:00:55.982953 containerd[1682]: time="2025-01-29T12:00:55.982858944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:00:55.983073 containerd[1682]: time="2025-01-29T12:00:55.982891904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:55.985098 containerd[1682]: time="2025-01-29T12:00:55.985026667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:55.985194 kubelet[3290]: E0129 12:00:55.985141 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:00:55.985194 kubelet[3290]: W0129 12:00:55.985158 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:00:55.985194 kubelet[3290]: E0129 12:00:55.985180 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
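The "unexpected end of JSON input" above is mechanical rather than mysterious: kubelet probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ as a FlexVolume driver, execs the driver binary with the argument init, and JSON-decodes whatever lands on stdout. The nodeagent~uds/uds binary does not exist yet at this point in boot, so the captured output is empty, and decoding an empty byte slice with Go's encoding/json yields exactly the logged error string. A minimal sketch reproducing it (the driverStatus struct here is illustrative, not kubelet's exact type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in for the status object kubelet expects a FlexVolume driver
    // to print on stdout; for illustration only.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        var ds driverStatus
        // The driver could not be exec'd, so its captured output is empty.
        err := json.Unmarshal([]byte(""), &ds)
        fmt.Println(err) // prints: unexpected end of JSON input
    }

The W entry (executable file not found in $PATH) is the real failure; the E entries are just kubelet trying, and failing, to parse the empty output of a binary it never ran.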
Jan 29 12:00:56.009890 systemd[1]: Started cri-containerd-e9f87a59f05021f91f626871384170b03f53b837fdacddfbca5ed1562b41b88f.scope - libcontainer container e9f87a59f05021f91f626871384170b03f53b837fdacddfbca5ed1562b41b88f.
Jan 29 12:00:56.015964 containerd[1682]: time="2025-01-29T12:00:56.015913751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srwq9,Uid:342cbe97-f22f-4415-ba7b-58f5a684e722,Namespace:calico-system,Attempt:0,}"
Jan 29 12:00:56.070596 containerd[1682]: time="2025-01-29T12:00:56.070424070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:00:56.070596 containerd[1682]: time="2025-01-29T12:00:56.070510590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:00:56.071007 containerd[1682]: time="2025-01-29T12:00:56.070527470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:56.073227 containerd[1682]: time="2025-01-29T12:00:56.072956313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7c6f9846-ggnwh,Uid:a4990856-fda6-464c-b9c9-3a624bb6b4bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9f87a59f05021f91f626871384170b03f53b837fdacddfbca5ed1562b41b88f\""
Jan 29 12:00:56.074179 containerd[1682]: time="2025-01-29T12:00:56.073345274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:56.077515 containerd[1682]: time="2025-01-29T12:00:56.077436360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 12:00:56.102825 systemd[1]: Started cri-containerd-d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5.scope - libcontainer container d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5.
Jan 29 12:00:56.141691 containerd[1682]: time="2025-01-29T12:00:56.141344771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srwq9,Uid:342cbe97-f22f-4415-ba7b-58f5a684e722,Namespace:calico-system,Attempt:0,} returns sandbox id \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\""
Jan 29 12:00:57.131004 kubelet[3290]: E0129 12:00:57.130906 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e"
Jan 29 12:00:57.558069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700744991.mount: Deactivated successfully.
Jan 29 12:00:58.253325 containerd[1682]: time="2025-01-29T12:00:58.253270208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:58.256643 containerd[1682]: time="2025-01-29T12:00:58.256594572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 29 12:00:58.260136 containerd[1682]: time="2025-01-29T12:00:58.260014417Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:58.265340 containerd[1682]: time="2025-01-29T12:00:58.265271785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:58.266451 containerd[1682]: time="2025-01-29T12:00:58.265928866Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.188442946s"
Jan 29 12:00:58.266451 containerd[1682]: time="2025-01-29T12:00:58.265962466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 29 12:00:58.267989 containerd[1682]: time="2025-01-29T12:00:58.267461988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:00:58.279468 containerd[1682]: time="2025-01-29T12:00:58.279422965Z" level=info msg="CreateContainer within sandbox \"e9f87a59f05021f91f626871384170b03f53b837fdacddfbca5ed1562b41b88f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 12:00:58.325083 containerd[1682]: time="2025-01-29T12:00:58.325035431Z" level=info msg="CreateContainer within sandbox \"e9f87a59f05021f91f626871384170b03f53b837fdacddfbca5ed1562b41b88f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"22241bfbaf8a0e7be17f1c581c6a9f82815d1e1163a898e835c56b60637b5eb1\""
Jan 29 12:00:58.326054 containerd[1682]: time="2025-01-29T12:00:58.325775272Z" level=info msg="StartContainer for \"22241bfbaf8a0e7be17f1c581c6a9f82815d1e1163a898e835c56b60637b5eb1\""
Jan 29 12:00:58.355840 systemd[1]: Started cri-containerd-22241bfbaf8a0e7be17f1c581c6a9f82815d1e1163a898e835c56b60637b5eb1.scope - libcontainer container 22241bfbaf8a0e7be17f1c581c6a9f82815d1e1163a898e835c56b60637b5eb1.
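For scale, the "Pulled image" entry above reports 29231162 bytes in 2.188442946s, i.e. roughly 12.7 MiB/s from ghcr.io. A quick check of that arithmetic on the logged numbers:

    package main

    import "fmt"

    func main() {
        const imageBytes = 29231162     // size reported in the "Pulled image" entry
        const pullSeconds = 2.188442946 // duration reported in the same entry
        fmt.Printf("%.1f MiB/s\n", imageBytes/pullSeconds/(1<<20)) // ~12.7 MiB/s
    }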
Jan 29 12:00:58.392140 containerd[1682]: time="2025-01-29T12:00:58.392018087Z" level=info msg="StartContainer for \"22241bfbaf8a0e7be17f1c581c6a9f82815d1e1163a898e835c56b60637b5eb1\" returns successfully"
Jan 29 12:00:59.131635 kubelet[3290]: E0129 12:00:59.131487 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e"
Jan 29 12:00:59.255533 kubelet[3290]: E0129 12:00:59.255405 3290 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:00:59.255533 kubelet[3290]: W0129 12:00:59.255429 3290 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:00:59.255533 kubelet[3290]: E0129 12:00:59.255449 3290 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go/plugins.go error sequence recurs with timestamps 12:00:59.255813 through 12:00:59.276399, differing only in timestamps ...]
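What kubelet wanted back from each of those init calls is a small JSON status object, per the FlexVolume driver contract; the pod2daemon-flexvol image whose pull started at 12:00:58.267 above ships the real nodeagent~uds driver. A minimal Go sketch of an entrypoint that would satisfy the init probe (illustrative only, not the actual Calico binary):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // attach:false tells kubelet this driver has no attach/detach
            // phase, only mount/unmount.
            fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
            return
        }
        // Real drivers also implement mount/unmount; anything else may be
        // reported as unsupported per the FlexVolume spec.
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }

Once a binary answering like this exists at the probed path, the dynamic-probe errors stop.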
Jan 29 12:00:59.599930 containerd[1682]: time="2025-01-29T12:00:59.599803867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:59.603214 containerd[1682]: time="2025-01-29T12:00:59.602725852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 29 12:00:59.611297 containerd[1682]: time="2025-01-29T12:00:59.611260845Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:59.616927 containerd[1682]: time="2025-01-29T12:00:59.616441049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:00:59.616927 containerd[1682]: time="2025-01-29T12:00:59.616829933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.349329064s"
Jan 29 12:00:59.616927 containerd[1682]: time="2025-01-29T12:00:59.616859453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 29 12:00:59.619545 containerd[1682]: time="2025-01-29T12:00:59.619410355Z" level=info msg="CreateContainer within sandbox \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 12:00:59.657477 containerd[1682]: time="2025-01-29T12:00:59.657425040Z" level=info msg="CreateContainer within sandbox \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a\""
Jan 29 12:00:59.658325 containerd[1682]: time="2025-01-29T12:00:59.658269527Z" level=info msg="StartContainer for \"32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a\""
Jan 29 12:00:59.693879 systemd[1]: run-containerd-runc-k8s.io-32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a-runc.BGAlxM.mount: Deactivated successfully.
Jan 29 12:00:59.704825 systemd[1]: Started cri-containerd-32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a.scope - libcontainer container 32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a.
Jan 29 12:00:59.739287 containerd[1682]: time="2025-01-29T12:00:59.739241059Z" level=info msg="StartContainer for \"32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a\" returns successfully"
Jan 29 12:00:59.747253 systemd[1]: cri-containerd-32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a.scope: Deactivated successfully.
Jan 29 12:00:59.771494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a-rootfs.mount: Deactivated successfully.
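Note the lifecycle above: the flexvol-driver container starts at 12:00:59.739 and its scope is deactivated about eight milliseconds later. That is expected; this container is an installer that copies the uds driver binary into the FlexVolume plugin directory and exits, which is what eventually silences the earlier driver-call errors. A hypothetical post-install check, not part of any of the logged components:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path kubelet was failing to exec earlier in this log.
        const p = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        fi, err := os.Stat(p)
        if err != nil || fi.Mode()&0o111 == 0 {
            fmt.Println("driver missing or not executable:", err)
            return
        }
        fmt.Println("driver installed:", p)
    }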
Jan 29 12:01:00.216531 kubelet[3290]: I0129 12:01:00.216142 3290 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:01:00.240404 kubelet[3290]: I0129 12:01:00.240332 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f7c6f9846-ggnwh" podStartSLOduration=3.050303436 podStartE2EDuration="5.240313664s" podCreationTimestamp="2025-01-29 12:00:55 +0000 UTC" firstStartedPulling="2025-01-29 12:00:56.076720839 +0000 UTC m=+22.081878166" lastFinishedPulling="2025-01-29 12:00:58.266731067 +0000 UTC m=+24.271888394" observedRunningTime="2025-01-29 12:00:59.229925504 +0000 UTC m=+25.235082831" watchObservedRunningTime="2025-01-29 12:01:00.240313664 +0000 UTC m=+26.245470951"
Jan 29 12:01:00.611556 containerd[1682]: time="2025-01-29T12:01:00.611407917Z" level=info msg="shim disconnected" id=32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a namespace=k8s.io
Jan 29 12:01:00.611556 containerd[1682]: time="2025-01-29T12:01:00.611483837Z" level=warning msg="cleaning up after shim disconnected" id=32347cec4ceb9ec1edc1fd4b2af9c893398c22f2313368be4801657e9936bd9a namespace=k8s.io
Jan 29 12:01:00.611556 containerd[1682]: time="2025-01-29T12:01:00.611495878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:01:01.131469 kubelet[3290]: E0129 12:01:01.131403 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e"
Jan 29 12:01:01.221862 containerd[1682]: time="2025-01-29T12:01:01.221626495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 12:01:03.131427 kubelet[3290]: E0129 12:01:03.131353 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e"
Jan 29 12:01:05.133139 kubelet[3290]: E0129 12:01:05.131562 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e"
Jan 29 12:01:05.254563 containerd[1682]: time="2025-01-29T12:01:05.254510669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:05.257341 containerd[1682]: time="2025-01-29T12:01:05.257276392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 29 12:01:05.260979 containerd[1682]: time="2025-01-29T12:01:05.260929036Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:01:05.265910 containerd[1682]: time="2025-01-29T12:01:05.265794121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
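The pod_startup_latency_tracker entry above is internally consistent: end-to-end startup is the watch-observed running time minus podCreationTimestamp, and the SLO duration additionally excludes the time spent pulling images. Reproducing the calico-typha numbers from that entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the "Observed pod startup duration" entry.
        created, _ := time.Parse(time.RFC3339, "2025-01-29T12:00:55Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-01-29T12:01:00.240313664Z")
        pullStart, _ := time.Parse(time.RFC3339Nano, "2025-01-29T12:00:56.076720839Z")
        pullEnd, _ := time.Parse(time.RFC3339Nano, "2025-01-29T12:00:58.266731067Z")

        e2e := running.Sub(created)         // 5.240313664s = podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 3.050303436s = podStartSLOduration
        fmt.Println(e2e, slo)
    }

5.240313664s minus the 2.190010228s pull window gives exactly the logged podStartSLOduration of 3.050303436.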
Jan 29 12:01:05.266635 containerd[1682]: time="2025-01-29T12:01:05.266496282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.044827067s"
Jan 29 12:01:05.266635 containerd[1682]: time="2025-01-29T12:01:05.266529682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 29 12:01:05.270588 containerd[1682]: time="2025-01-29T12:01:05.270460166Z" level=info msg="CreateContainer within sandbox \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 12:01:05.309694 containerd[1682]: time="2025-01-29T12:01:05.309647086Z" level=info msg="CreateContainer within sandbox \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed\""
Jan 29 12:01:05.310553 containerd[1682]: time="2025-01-29T12:01:05.310523007Z" level=info msg="StartContainer for \"334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed\""
Jan 29 12:01:05.343809 systemd[1]: Started cri-containerd-334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed.scope - libcontainer container 334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed.
Jan 29 12:01:05.376983 containerd[1682]: time="2025-01-29T12:01:05.376912595Z" level=info msg="StartContainer for \"334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed\" returns successfully"
Jan 29 12:01:05.567859 kubelet[3290]: I0129 12:01:05.567563 3290 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:01:06.910128 containerd[1682]: time="2025-01-29T12:01:06.910068996Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 12:01:06.912057 systemd[1]: cri-containerd-334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed.scope: Deactivated successfully.
Jan 29 12:01:06.934778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed-rootfs.mount: Deactivated successfully.
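The reload failure above is transient rather than fatal: Calico's install-cni wrote /etc/cni/net.d/calico-kubeconfig, which trips containerd's file watch, but a kubeconfig is not a network config, so the directory still holds no *.conf/*.conflist and the CNI stays uninitialized (hence the csi-node-driver retries). A rough sketch, under the assumption that containerd's CNI library scans the directory roughly like this:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            fmt.Println("cni plugin not initialized:", err)
            return
        }
        found := false
        for _, e := range entries {
            // Only these extensions count as network configs; a stray
            // kubeconfig in the same directory is ignored.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("network config candidate:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no network config found in /etc/cni/net.d")
        }
    }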
Jan 29 12:01:06.949347 kubelet[3290]: I0129 12:01:06.949119 3290 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 12:01:07.086217 kubelet[3290]: I0129 12:01:06.985122 3290 topology_manager.go:215] "Topology Admit Handler" podUID="5cce7f93-5a7f-4e55-81c0-95de17e446e6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-krvv7"
Jan 29 12:01:07.086217 kubelet[3290]: I0129 12:01:06.994712 3290 topology_manager.go:215] "Topology Admit Handler" podUID="c7fa59ba-a213-4b70-ae70-9ad0728fde63" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s7b7m"
Jan 29 12:01:07.086217 kubelet[3290]: I0129 12:01:06.994910 3290 topology_manager.go:215] "Topology Admit Handler" podUID="5b402940-58cf-4f1a-a6f1-92b0dbe92e45" podNamespace="calico-system" podName="calico-kube-controllers-679c5f8dc4-tpdkk"
Jan 29 12:01:07.086217 kubelet[3290]: I0129 12:01:06.995004 3290 topology_manager.go:215] "Topology Admit Handler" podUID="bfd5f24b-a59e-4991-be63-4ecca56f7298" podNamespace="calico-apiserver" podName="calico-apiserver-5b958f5bc6-wzcct"
Jan 29 12:01:07.086217 kubelet[3290]: I0129 12:01:06.995142 3290 topology_manager.go:215] "Topology Admit Handler" podUID="acc59e98-b69f-4ebb-bb77-4faa7d499339" podNamespace="calico-apiserver" podName="calico-apiserver-5b958f5bc6-dd9vs"
Jan 29 12:01:06.999399 systemd[1]: Created slice kubepods-burstable-pod5cce7f93_5a7f_4e55_81c0_95de17e446e6.slice - libcontainer container kubepods-burstable-pod5cce7f93_5a7f_4e55_81c0_95de17e446e6.slice.
Jan 29 12:01:07.009014 systemd[1]: Created slice kubepods-besteffort-podbfd5f24b_a59e_4991_be63_4ecca56f7298.slice - libcontainer container kubepods-besteffort-podbfd5f24b_a59e_4991_be63_4ecca56f7298.slice.
Jan 29 12:01:07.019428 systemd[1]: Created slice kubepods-besteffort-podacc59e98_b69f_4ebb_bb77_4faa7d499339.slice - libcontainer container kubepods-besteffort-podacc59e98_b69f_4ebb_bb77_4faa7d499339.slice.
Jan 29 12:01:07.028044 systemd[1]: Created slice kubepods-besteffort-pod5b402940_58cf_4f1a_a6f1_92b0dbe92e45.slice - libcontainer container kubepods-besteffort-pod5b402940_58cf_4f1a_a6f1_92b0dbe92e45.slice.
Jan 29 12:01:07.036222 systemd[1]: Created slice kubepods-burstable-podc7fa59ba_a213_4b70_ae70_9ad0728fde63.slice - libcontainer container kubepods-burstable-podc7fa59ba_a213_4b70_ae70_9ad0728fde63.slice.
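The slice names map back to the pod UIDs in the Topology Admit Handler entries: "-" is systemd's slice-hierarchy separator, so the kubelet's systemd cgroup driver substitutes "_" inside the UID. An illustration of the mapping (hypothetical helper, not kubelet code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Pod UID of coredns-7db6d8ff4d-krvv7 from the admit-handler entry.
        uid := "5cce7f93-5a7f-4e55-81c0-95de17e446e6"
        slice := "kubepods-burstable-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
        fmt.Println(slice) // kubepods-burstable-pod5cce7f93_5a7f_4e55_81c0_95de17e446e6.slice
    }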
Jan 29 12:01:07.121114 kubelet[3290]: I0129 12:01:07.121067 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69cv4\" (UniqueName: \"kubernetes.io/projected/5b402940-58cf-4f1a-a6f1-92b0dbe92e45-kube-api-access-69cv4\") pod \"calico-kube-controllers-679c5f8dc4-tpdkk\" (UID: \"5b402940-58cf-4f1a-a6f1-92b0dbe92e45\") " pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk"
Jan 29 12:01:07.121114 kubelet[3290]: I0129 12:01:07.121119 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhh4\" (UniqueName: \"kubernetes.io/projected/bfd5f24b-a59e-4991-be63-4ecca56f7298-kube-api-access-vwhh4\") pod \"calico-apiserver-5b958f5bc6-wzcct\" (UID: \"bfd5f24b-a59e-4991-be63-4ecca56f7298\") " pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct"
Jan 29 12:01:07.121281 kubelet[3290]: I0129 12:01:07.121144 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7fa59ba-a213-4b70-ae70-9ad0728fde63-config-volume\") pod \"coredns-7db6d8ff4d-s7b7m\" (UID: \"c7fa59ba-a213-4b70-ae70-9ad0728fde63\") " pod="kube-system/coredns-7db6d8ff4d-s7b7m"
Jan 29 12:01:07.121281 kubelet[3290]: I0129 12:01:07.121164 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b402940-58cf-4f1a-a6f1-92b0dbe92e45-tigera-ca-bundle\") pod \"calico-kube-controllers-679c5f8dc4-tpdkk\" (UID: \"5b402940-58cf-4f1a-a6f1-92b0dbe92e45\") " pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk"
Jan 29 12:01:07.121281 kubelet[3290]: I0129 12:01:07.121183 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bfd5f24b-a59e-4991-be63-4ecca56f7298-calico-apiserver-certs\") pod \"calico-apiserver-5b958f5bc6-wzcct\" (UID: \"bfd5f24b-a59e-4991-be63-4ecca56f7298\") " pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct"
Jan 29 12:01:07.121281 kubelet[3290]: I0129 12:01:07.121199 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdng8\" (UniqueName: \"kubernetes.io/projected/c7fa59ba-a213-4b70-ae70-9ad0728fde63-kube-api-access-fdng8\") pod \"coredns-7db6d8ff4d-s7b7m\" (UID: \"c7fa59ba-a213-4b70-ae70-9ad0728fde63\") " pod="kube-system/coredns-7db6d8ff4d-s7b7m"
Jan 29 12:01:07.121281 kubelet[3290]: I0129 12:01:07.121217 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlnc\" (UniqueName: \"kubernetes.io/projected/acc59e98-b69f-4ebb-bb77-4faa7d499339-kube-api-access-zwlnc\") pod \"calico-apiserver-5b958f5bc6-dd9vs\" (UID: \"acc59e98-b69f-4ebb-bb77-4faa7d499339\") " pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs"
Jan 29 12:01:07.121399 kubelet[3290]: I0129 12:01:07.121240 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fm6q\" (UniqueName: \"kubernetes.io/projected/5cce7f93-5a7f-4e55-81c0-95de17e446e6-kube-api-access-6fm6q\") pod \"coredns-7db6d8ff4d-krvv7\" (UID: \"5cce7f93-5a7f-4e55-81c0-95de17e446e6\") " pod="kube-system/coredns-7db6d8ff4d-krvv7"
Jan 29 12:01:07.121399 kubelet[3290]: I0129 12:01:07.121256 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cce7f93-5a7f-4e55-81c0-95de17e446e6-config-volume\") pod \"coredns-7db6d8ff4d-krvv7\" (UID: \"5cce7f93-5a7f-4e55-81c0-95de17e446e6\") " pod="kube-system/coredns-7db6d8ff4d-krvv7"
Jan 29 12:01:07.121399 kubelet[3290]: I0129 12:01:07.121274 3290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/acc59e98-b69f-4ebb-bb77-4faa7d499339-calico-apiserver-certs\") pod \"calico-apiserver-5b958f5bc6-dd9vs\" (UID: \"acc59e98-b69f-4ebb-bb77-4faa7d499339\") " pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs"
Jan 29 12:01:07.137685 systemd[1]: Created slice kubepods-besteffort-pod4021a8b2_0983_46eb_bc49_b0d6f7f6429e.slice - libcontainer container kubepods-besteffort-pod4021a8b2_0983_46eb_bc49_b0d6f7f6429e.slice.
Jan 29 12:01:07.140036 containerd[1682]: time="2025-01-29T12:01:07.139988329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b7ld,Uid:4021a8b2-0983-46eb-bc49-b0d6f7f6429e,Namespace:calico-system,Attempt:0,}"
Jan 29 12:01:07.385536 containerd[1682]: time="2025-01-29T12:01:07.385494809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krvv7,Uid:5cce7f93-5a7f-4e55-81c0-95de17e446e6,Namespace:kube-system,Attempt:0,}"
Jan 29 12:01:07.390524 containerd[1682]: time="2025-01-29T12:01:07.390229458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-dd9vs,Uid:acc59e98-b69f-4ebb-bb77-4faa7d499339,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 12:01:07.390972 containerd[1682]: time="2025-01-29T12:01:07.390858659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-wzcct,Uid:bfd5f24b-a59e-4991-be63-4ecca56f7298,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 12:01:07.391331 containerd[1682]: time="2025-01-29T12:01:07.390922979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679c5f8dc4-tpdkk,Uid:5b402940-58cf-4f1a-a6f1-92b0dbe92e45,Namespace:calico-system,Attempt:0,}"
Jan 29 12:01:07.391545 containerd[1682]: time="2025-01-29T12:01:07.391054499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7b7m,Uid:c7fa59ba-a213-4b70-ae70-9ad0728fde63,Namespace:kube-system,Attempt:0,}"
Jan 29 12:01:07.540659 containerd[1682]: time="2025-01-29T12:01:07.540443767Z" level=info msg="shim disconnected" id=334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed namespace=k8s.io
Jan 29 12:01:07.540659 containerd[1682]: time="2025-01-29T12:01:07.540519727Z" level=warning msg="cleaning up after shim disconnected" id=334350d50e12ba0c95ae4eca1f28db062b8a8bf54bd211ca444435c47cf381ed namespace=k8s.io
Jan 29 12:01:07.540659 containerd[1682]: time="2025-01-29T12:01:07.540528727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:01:07.844440 containerd[1682]: time="2025-01-29T12:01:07.844231072Z" level=error msg="Failed to destroy network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:01:07.846987 containerd[1682]: time="2025-01-29T12:01:07.846846397Z" level=error msg="encountered an error cleaning up failed sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.846987 containerd[1682]: time="2025-01-29T12:01:07.846920837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b7ld,Uid:4021a8b2-0983-46eb-bc49-b0d6f7f6429e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.847755 kubelet[3290]: E0129 12:01:07.847165 3290 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.847755 kubelet[3290]: E0129 12:01:07.847225 3290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:01:07.847755 kubelet[3290]: E0129 12:01:07.847243 3290 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6b7ld" Jan 29 12:01:07.847995 kubelet[3290]: E0129 12:01:07.847287 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6b7ld_calico-system(4021a8b2-0983-46eb-bc49-b0d6f7f6429e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6b7ld_calico-system(4021a8b2-0983-46eb-bc49-b0d6f7f6429e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e" Jan 29 12:01:07.856698 containerd[1682]: time="2025-01-29T12:01:07.856636174Z" level=error msg="Failed to destroy network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.858027 containerd[1682]: time="2025-01-29T12:01:07.857936696Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.858027 containerd[1682]: time="2025-01-29T12:01:07.858010897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679c5f8dc4-tpdkk,Uid:5b402940-58cf-4f1a-a6f1-92b0dbe92e45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.860201 kubelet[3290]: E0129 12:01:07.859732 3290 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.861085 kubelet[3290]: E0129 12:01:07.861049 3290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk" Jan 29 12:01:07.861167 kubelet[3290]: E0129 12:01:07.861092 3290 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk" Jan 29 12:01:07.861167 kubelet[3290]: E0129 12:01:07.861136 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-679c5f8dc4-tpdkk_calico-system(5b402940-58cf-4f1a-a6f1-92b0dbe92e45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-679c5f8dc4-tpdkk_calico-system(5b402940-58cf-4f1a-a6f1-92b0dbe92e45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk" podUID="5b402940-58cf-4f1a-a6f1-92b0dbe92e45" Jan 29 12:01:07.872717 containerd[1682]: time="2025-01-29T12:01:07.872450122Z" level=error msg="Failed to destroy network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 29 12:01:07.873442 containerd[1682]: time="2025-01-29T12:01:07.873396284Z" level=error msg="encountered an error cleaning up failed sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.873733 containerd[1682]: time="2025-01-29T12:01:07.873701325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-dd9vs,Uid:acc59e98-b69f-4ebb-bb77-4faa7d499339,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.874087 kubelet[3290]: E0129 12:01:07.874041 3290 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.874176 kubelet[3290]: E0129 12:01:07.874105 3290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs" Jan 29 12:01:07.874176 kubelet[3290]: E0129 12:01:07.874125 3290 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs" Jan 29 12:01:07.874229 kubelet[3290]: E0129 12:01:07.874164 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b958f5bc6-dd9vs_calico-apiserver(acc59e98-b69f-4ebb-bb77-4faa7d499339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b958f5bc6-dd9vs_calico-apiserver(acc59e98-b69f-4ebb-bb77-4faa7d499339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs" podUID="acc59e98-b69f-4ebb-bb77-4faa7d499339" Jan 29 12:01:07.879040 containerd[1682]: time="2025-01-29T12:01:07.878922294Z" level=error msg="Failed to destroy network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.879357 containerd[1682]: time="2025-01-29T12:01:07.879255655Z" level=error msg="encountered an error cleaning up failed sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.879357 containerd[1682]: time="2025-01-29T12:01:07.879311255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-wzcct,Uid:bfd5f24b-a59e-4991-be63-4ecca56f7298,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.879593 kubelet[3290]: E0129 12:01:07.879503 3290 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.879593 kubelet[3290]: E0129 12:01:07.879584 3290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct" Jan 29 12:01:07.879992 kubelet[3290]: E0129 12:01:07.879958 3290 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct" Jan 29 12:01:07.880402 kubelet[3290]: E0129 12:01:07.880041 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b958f5bc6-wzcct_calico-apiserver(bfd5f24b-a59e-4991-be63-4ecca56f7298)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b958f5bc6-wzcct_calico-apiserver(bfd5f24b-a59e-4991-be63-4ecca56f7298)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct" podUID="bfd5f24b-a59e-4991-be63-4ecca56f7298" Jan 29 12:01:07.891408 containerd[1682]: 
time="2025-01-29T12:01:07.891264076Z" level=error msg="Failed to destroy network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.891774 containerd[1682]: time="2025-01-29T12:01:07.891605477Z" level=error msg="encountered an error cleaning up failed sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.891774 containerd[1682]: time="2025-01-29T12:01:07.891691917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krvv7,Uid:5cce7f93-5a7f-4e55-81c0-95de17e446e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.892020 kubelet[3290]: E0129 12:01:07.891945 3290 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.892020 kubelet[3290]: E0129 12:01:07.892001 3290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krvv7" Jan 29 12:01:07.892104 kubelet[3290]: E0129 12:01:07.892022 3290 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-krvv7" Jan 29 12:01:07.892104 kubelet[3290]: E0129 12:01:07.892065 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-krvv7_kube-system(5cce7f93-5a7f-4e55-81c0-95de17e446e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-krvv7_kube-system(5cce7f93-5a7f-4e55-81c0-95de17e446e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krvv7" 
podUID="5cce7f93-5a7f-4e55-81c0-95de17e446e6" Jan 29 12:01:07.899234 containerd[1682]: time="2025-01-29T12:01:07.899132090Z" level=error msg="Failed to destroy network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.899476 containerd[1682]: time="2025-01-29T12:01:07.899442051Z" level=error msg="encountered an error cleaning up failed sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.899528 containerd[1682]: time="2025-01-29T12:01:07.899507331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7b7m,Uid:c7fa59ba-a213-4b70-ae70-9ad0728fde63,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.899797 kubelet[3290]: E0129 12:01:07.899747 3290 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:07.900674 kubelet[3290]: E0129 12:01:07.899902 3290 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s7b7m" Jan 29 12:01:07.900674 kubelet[3290]: E0129 12:01:07.899927 3290 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s7b7m" Jan 29 12:01:07.900674 kubelet[3290]: E0129 12:01:07.899977 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s7b7m_kube-system(c7fa59ba-a213-4b70-ae70-9ad0728fde63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s7b7m_kube-system(c7fa59ba-a213-4b70-ae70-9ad0728fde63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s7b7m" podUID="c7fa59ba-a213-4b70-ae70-9ad0728fde63" Jan 29 12:01:08.240574 kubelet[3290]: I0129 12:01:08.240519 3290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:08.242061 containerd[1682]: time="2025-01-29T12:01:08.241453264Z" level=info msg="StopPodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\"" Jan 29 12:01:08.242061 containerd[1682]: time="2025-01-29T12:01:08.241646305Z" level=info msg="Ensure that sandbox e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5 in task-service has been cleanup successfully" Jan 29 12:01:08.245061 kubelet[3290]: I0129 12:01:08.244698 3290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:08.246470 containerd[1682]: time="2025-01-29T12:01:08.246425153Z" level=info msg="StopPodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\"" Jan 29 12:01:08.246934 containerd[1682]: time="2025-01-29T12:01:08.246649874Z" level=info msg="Ensure that sandbox fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780 in task-service has been cleanup successfully" Jan 29 12:01:08.248551 kubelet[3290]: I0129 12:01:08.248474 3290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:08.250640 containerd[1682]: time="2025-01-29T12:01:08.249874919Z" level=info msg="StopPodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\"" Jan 29 12:01:08.250640 containerd[1682]: time="2025-01-29T12:01:08.250138320Z" level=info msg="Ensure that sandbox a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4 in task-service has been cleanup successfully" Jan 29 12:01:08.253352 kubelet[3290]: I0129 12:01:08.253297 3290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:08.254041 containerd[1682]: time="2025-01-29T12:01:08.254002567Z" level=info msg="StopPodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\"" Jan 29 12:01:08.255418 containerd[1682]: time="2025-01-29T12:01:08.255373209Z" level=info msg="Ensure that sandbox d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605 in task-service has been cleanup successfully" Jan 29 12:01:08.258059 kubelet[3290]: I0129 12:01:08.257997 3290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:08.261384 containerd[1682]: time="2025-01-29T12:01:08.261130500Z" level=info msg="StopPodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\"" Jan 29 12:01:08.262167 containerd[1682]: time="2025-01-29T12:01:08.262021941Z" level=info msg="Ensure that sandbox c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92 in task-service has been cleanup successfully" Jan 29 12:01:08.262576 kubelet[3290]: I0129 12:01:08.262446 3290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:08.265446 containerd[1682]: time="2025-01-29T12:01:08.265355227Z" level=info 
msg="StopPodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\"" Jan 29 12:01:08.266942 containerd[1682]: time="2025-01-29T12:01:08.266817030Z" level=info msg="Ensure that sandbox c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3 in task-service has been cleanup successfully" Jan 29 12:01:08.277439 containerd[1682]: time="2025-01-29T12:01:08.277386809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:01:08.331955 containerd[1682]: time="2025-01-29T12:01:08.331780306Z" level=error msg="StopPodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" failed" error="failed to destroy network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:08.333524 kubelet[3290]: E0129 12:01:08.332044 3290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:08.333524 kubelet[3290]: E0129 12:01:08.332212 3290 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5"} Jan 29 12:01:08.333524 kubelet[3290]: E0129 12:01:08.332557 3290 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfd5f24b-a59e-4991-be63-4ecca56f7298\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:08.333524 kubelet[3290]: E0129 12:01:08.332594 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfd5f24b-a59e-4991-be63-4ecca56f7298\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct" podUID="bfd5f24b-a59e-4991-be63-4ecca56f7298" Jan 29 12:01:08.350008 containerd[1682]: time="2025-01-29T12:01:08.349952979Z" level=error msg="StopPodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" failed" error="failed to destroy network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:08.351207 kubelet[3290]: E0129 12:01:08.351162 3290 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:08.351325 kubelet[3290]: E0129 12:01:08.351216 3290 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780"} Jan 29 12:01:08.351356 kubelet[3290]: E0129 12:01:08.351254 3290 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5cce7f93-5a7f-4e55-81c0-95de17e446e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:08.351413 kubelet[3290]: E0129 12:01:08.351387 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5cce7f93-5a7f-4e55-81c0-95de17e446e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-krvv7" podUID="5cce7f93-5a7f-4e55-81c0-95de17e446e6" Jan 29 12:01:08.363061 containerd[1682]: time="2025-01-29T12:01:08.363011642Z" level=error msg="StopPodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" failed" error="failed to destroy network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:08.363472 kubelet[3290]: E0129 12:01:08.363425 3290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:08.363544 kubelet[3290]: E0129 12:01:08.363480 3290 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3"} Jan 29 12:01:08.363544 kubelet[3290]: E0129 12:01:08.363513 3290 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acc59e98-b69f-4ebb-bb77-4faa7d499339\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:08.363544 kubelet[3290]: E0129 12:01:08.363534 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acc59e98-b69f-4ebb-bb77-4faa7d499339\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs" podUID="acc59e98-b69f-4ebb-bb77-4faa7d499339" Jan 29 12:01:08.363972 containerd[1682]: time="2025-01-29T12:01:08.363912844Z" level=error msg="StopPodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" failed" error="failed to destroy network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:08.364120 kubelet[3290]: E0129 12:01:08.364088 3290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:08.364164 kubelet[3290]: E0129 12:01:08.364137 3290 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92"} Jan 29 12:01:08.364214 kubelet[3290]: E0129 12:01:08.364163 3290 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b402940-58cf-4f1a-a6f1-92b0dbe92e45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:08.364214 kubelet[3290]: E0129 12:01:08.364181 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b402940-58cf-4f1a-a6f1-92b0dbe92e45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk" podUID="5b402940-58cf-4f1a-a6f1-92b0dbe92e45" Jan 29 12:01:08.365293 containerd[1682]: time="2025-01-29T12:01:08.365241126Z" level=error msg="StopPodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" failed" error="failed to destroy network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:08.365479 kubelet[3290]: E0129 12:01:08.365385 3290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:08.365607 kubelet[3290]: E0129 12:01:08.365487 3290 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605"} Jan 29 12:01:08.365663 kubelet[3290]: E0129 12:01:08.365643 3290 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7fa59ba-a213-4b70-ae70-9ad0728fde63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:08.365717 kubelet[3290]: E0129 12:01:08.365668 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7fa59ba-a213-4b70-ae70-9ad0728fde63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s7b7m" podUID="c7fa59ba-a213-4b70-ae70-9ad0728fde63" Jan 29 12:01:08.366580 containerd[1682]: time="2025-01-29T12:01:08.366443729Z" level=error msg="StopPodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" failed" error="failed to destroy network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:01:08.366851 kubelet[3290]: E0129 12:01:08.366807 3290 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:08.366892 kubelet[3290]: E0129 12:01:08.366852 3290 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4"} Jan 29 12:01:08.366892 kubelet[3290]: E0129 12:01:08.366874 3290 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" 
for \"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:01:08.366970 kubelet[3290]: E0129 12:01:08.366891 3290 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4021a8b2-0983-46eb-bc49-b0d6f7f6429e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6b7ld" podUID="4021a8b2-0983-46eb-bc49-b0d6f7f6429e" Jan 29 12:01:11.361120 update_engine[1663]: I20250129 12:01:11.361071 1663 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.361486 1663 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.361691 1663 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362049 1663 omaha_request_params.cc:62] Current group set to lts Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362129 1663 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362138 1663 update_attempter.cc:643] Scheduling an action processor start. Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362154 1663 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362180 1663 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362247 1663 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362257 1663 omaha_request_action.cc:272] Request: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: Jan 29 12:01:11.363165 update_engine[1663]: I20250129 12:01:11.362263 1663 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:01:11.363711 locksmithd[1712]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 12:01:11.363915 update_engine[1663]: I20250129 12:01:11.363334 1663 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:01:11.363915 update_engine[1663]: I20250129 12:01:11.363641 1663 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 12:01:11.503257 update_engine[1663]: E20250129 12:01:11.503196 1663 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:01:11.503387 update_engine[1663]: I20250129 12:01:11.503302 1663 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 12:01:14.457954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544975579.mount: Deactivated successfully. Jan 29 12:01:14.509245 containerd[1682]: time="2025-01-29T12:01:14.509187067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:14.511725 containerd[1682]: time="2025-01-29T12:01:14.511685911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 12:01:14.515248 containerd[1682]: time="2025-01-29T12:01:14.515192637Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:14.519573 containerd[1682]: time="2025-01-29T12:01:14.519534645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:14.520357 containerd[1682]: time="2025-01-29T12:01:14.520183286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.242743717s" Jan 29 12:01:14.520357 containerd[1682]: time="2025-01-29T12:01:14.520222886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 12:01:14.532638 containerd[1682]: time="2025-01-29T12:01:14.532213468Z" level=info msg="CreateContainer within sandbox \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:01:14.591989 containerd[1682]: time="2025-01-29T12:01:14.591929375Z" level=info msg="CreateContainer within sandbox \"d265f795f8b4527021664bee9cdb2dd1663950f4183b59710cf452248ed344f5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73\"" Jan 29 12:01:14.593226 containerd[1682]: time="2025-01-29T12:01:14.592512976Z" level=info msg="StartContainer for \"976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73\"" Jan 29 12:01:14.618823 systemd[1]: Started cri-containerd-976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73.scope - libcontainer container 976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73. Jan 29 12:01:14.652981 containerd[1682]: time="2025-01-29T12:01:14.652865670Z" level=info msg="StartContainer for \"976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73\" returns successfully" Jan 29 12:01:14.980371 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:01:14.980500 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 29 12:01:15.327738 kubelet[3290]: I0129 12:01:15.327448 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-srwq9" podStartSLOduration=1.948303153 podStartE2EDuration="20.321929101s" podCreationTimestamp="2025-01-29 12:00:55 +0000 UTC" firstStartedPulling="2025-01-29 12:00:56.1474969 +0000 UTC m=+22.152654227" lastFinishedPulling="2025-01-29 12:01:14.521122848 +0000 UTC m=+40.526280175" observedRunningTime="2025-01-29 12:01:15.321316581 +0000 UTC m=+41.326473868" watchObservedRunningTime="2025-01-29 12:01:15.321929101 +0000 UTC m=+41.327086428" Jan 29 12:01:16.675686 kernel: bpftool[4548]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:01:16.917955 systemd-networkd[1466]: vxlan.calico: Link UP Jan 29 12:01:16.917962 systemd-networkd[1466]: vxlan.calico: Gained carrier Jan 29 12:01:18.619809 systemd-networkd[1466]: vxlan.calico: Gained IPv6LL Jan 29 12:01:19.132607 containerd[1682]: time="2025-01-29T12:01:19.132324103Z" level=info msg="StopPodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\"" Jan 29 12:01:19.133298 containerd[1682]: time="2025-01-29T12:01:19.133084704Z" level=info msg="StopPodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\"" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.213 [INFO][4649] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.214 [INFO][4649] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" iface="eth0" netns="/var/run/netns/cni-d3bfefdd-b4e2-947a-bff7-12b6f634ab1d" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.214 [INFO][4649] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" iface="eth0" netns="/var/run/netns/cni-d3bfefdd-b4e2-947a-bff7-12b6f634ab1d" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.216 [INFO][4649] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" iface="eth0" netns="/var/run/netns/cni-d3bfefdd-b4e2-947a-bff7-12b6f634ab1d" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.217 [INFO][4649] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.218 [INFO][4649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.242 [INFO][4661] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.242 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.243 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.250 [WARNING][4661] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.251 [INFO][4661] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.252 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:19.258275 containerd[1682]: 2025-01-29 12:01:19.256 [INFO][4649] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:19.260873 containerd[1682]: time="2025-01-29T12:01:19.258498707Z" level=info msg="TearDown network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" successfully" Jan 29 12:01:19.260873 containerd[1682]: time="2025-01-29T12:01:19.258526907Z" level=info msg="StopPodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" returns successfully" Jan 29 12:01:19.263168 systemd[1]: run-netns-cni\x2dd3bfefdd\x2db4e2\x2d947a\x2dbff7\x2d12b6f634ab1d.mount: Deactivated successfully. Jan 29 12:01:19.271833 containerd[1682]: time="2025-01-29T12:01:19.271771684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krvv7,Uid:5cce7f93-5a7f-4e55-81c0-95de17e446e6,Namespace:kube-system,Attempt:1,}" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.219 [INFO][4650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.219 [INFO][4650] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" iface="eth0" netns="/var/run/netns/cni-573f659c-26b6-1aef-d529-f82437834080" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.219 [INFO][4650] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" iface="eth0" netns="/var/run/netns/cni-573f659c-26b6-1aef-d529-f82437834080" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.220 [INFO][4650] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" iface="eth0" netns="/var/run/netns/cni-573f659c-26b6-1aef-d529-f82437834080" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.220 [INFO][4650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.220 [INFO][4650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.246 [INFO][4662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.247 [INFO][4662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.252 [INFO][4662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.267 [WARNING][4662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.267 [INFO][4662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.269 [INFO][4662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:19.273555 containerd[1682]: 2025-01-29 12:01:19.271 [INFO][4650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:19.274472 containerd[1682]: time="2025-01-29T12:01:19.274015527Z" level=info msg="TearDown network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" successfully" Jan 29 12:01:19.274472 containerd[1682]: time="2025-01-29T12:01:19.274047927Z" level=info msg="StopPodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" returns successfully" Jan 29 12:01:19.275895 containerd[1682]: time="2025-01-29T12:01:19.275860049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7b7m,Uid:c7fa59ba-a213-4b70-ae70-9ad0728fde63,Namespace:kube-system,Attempt:1,}" Jan 29 12:01:19.277554 systemd[1]: run-netns-cni\x2d573f659c\x2d26b6\x2d1aef\x2dd529\x2df82437834080.mount: Deactivated successfully. 
Jan 29 12:01:19.482142 systemd-networkd[1466]: cali669f46d71ce: Link UP Jan 29 12:01:19.482370 systemd-networkd[1466]: cali669f46d71ce: Gained carrier Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.379 [INFO][4673] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0 coredns-7db6d8ff4d- kube-system 5cce7f93-5a7f-4e55-81c0-95de17e446e6 753 0 2025-01-29 12:00:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-ecab7ceadc coredns-7db6d8ff4d-krvv7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali669f46d71ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.379 [INFO][4673] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.426 [INFO][4695] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" HandleID="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.439 [INFO][4695] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" HandleID="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-ecab7ceadc", "pod":"coredns-7db6d8ff4d-krvv7", "timestamp":"2025-01-29 12:01:19.426030925 +0000 UTC"}, Hostname:"ci-4081.3.0-a-ecab7ceadc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.440 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.440 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.440 [INFO][4695] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-ecab7ceadc' Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.443 [INFO][4695] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.448 [INFO][4695] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.452 [INFO][4695] ipam/ipam.go 489: Trying affinity for 192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.454 [INFO][4695] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.456 [INFO][4695] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.456 [INFO][4695] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.457 [INFO][4695] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26 Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.464 [INFO][4695] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.469 [INFO][4695] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.129/26] block=192.168.64.128/26 handle="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.470 [INFO][4695] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.129/26] handle="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.470 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:01:19.502500 containerd[1682]: 2025-01-29 12:01:19.470 [INFO][4695] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.129/26] IPv6=[] ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" HandleID="k8s-pod-network.2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.503068 containerd[1682]: 2025-01-29 12:01:19.472 [INFO][4673] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5cce7f93-5a7f-4e55-81c0-95de17e446e6", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"", Pod:"coredns-7db6d8ff4d-krvv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali669f46d71ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:19.503068 containerd[1682]: 2025-01-29 12:01:19.473 [INFO][4673] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.129/32] ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.503068 containerd[1682]: 2025-01-29 12:01:19.473 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali669f46d71ce ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.503068 containerd[1682]: 2025-01-29 12:01:19.481 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" 
WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.503068 containerd[1682]: 2025-01-29 12:01:19.481 [INFO][4673] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5cce7f93-5a7f-4e55-81c0-95de17e446e6", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26", Pod:"coredns-7db6d8ff4d-krvv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali669f46d71ce", MAC:"52:34:68:2b:47:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:19.503068 containerd[1682]: 2025-01-29 12:01:19.498 [INFO][4673] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-krvv7" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:19.530330 systemd-networkd[1466]: calif2c3cd21d5d: Link UP Jan 29 12:01:19.531790 systemd-networkd[1466]: calif2c3cd21d5d: Gained carrier Jan 29 12:01:19.546602 containerd[1682]: time="2025-01-29T12:01:19.545195520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:19.546602 containerd[1682]: time="2025-01-29T12:01:19.545251440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:19.546602 containerd[1682]: time="2025-01-29T12:01:19.545262160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:19.546602 containerd[1682]: time="2025-01-29T12:01:19.545345760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.388 [INFO][4682] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0 coredns-7db6d8ff4d- kube-system c7fa59ba-a213-4b70-ae70-9ad0728fde63 754 0 2025-01-29 12:00:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-ecab7ceadc coredns-7db6d8ff4d-s7b7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2c3cd21d5d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.388 [INFO][4682] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.429 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" HandleID="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.446 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" HandleID="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332c80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-ecab7ceadc", "pod":"coredns-7db6d8ff4d-s7b7m", "timestamp":"2025-01-29 12:01:19.42952565 +0000 UTC"}, Hostname:"ci-4081.3.0-a-ecab7ceadc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.446 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.470 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.470 [INFO][4699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-ecab7ceadc' Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.474 [INFO][4699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.479 [INFO][4699] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.497 [INFO][4699] ipam/ipam.go 489: Trying affinity for 192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.501 [INFO][4699] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.506 [INFO][4699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.507 [INFO][4699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.509 [INFO][4699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.514 [INFO][4699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.523 [INFO][4699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.130/26] block=192.168.64.128/26 handle="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.523 [INFO][4699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.130/26] handle="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.523 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:01:19.557792 containerd[1682]: 2025-01-29 12:01:19.523 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.130/26] IPv6=[] ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" HandleID="k8s-pod-network.0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.558451 containerd[1682]: 2025-01-29 12:01:19.526 [INFO][4682] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c7fa59ba-a213-4b70-ae70-9ad0728fde63", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"", Pod:"coredns-7db6d8ff4d-s7b7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c3cd21d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:19.558451 containerd[1682]: 2025-01-29 12:01:19.526 [INFO][4682] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.130/32] ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.558451 containerd[1682]: 2025-01-29 12:01:19.526 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2c3cd21d5d ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.558451 containerd[1682]: 2025-01-29 12:01:19.532 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" 
WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.558451 containerd[1682]: 2025-01-29 12:01:19.533 [INFO][4682] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c7fa59ba-a213-4b70-ae70-9ad0728fde63", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e", Pod:"coredns-7db6d8ff4d-s7b7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c3cd21d5d", MAC:"62:b2:fb:ed:97:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:19.558451 containerd[1682]: 2025-01-29 12:01:19.550 [INFO][4682] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s7b7m" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:19.587829 systemd[1]: Started cri-containerd-2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26.scope - libcontainer container 2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26. 
Jan 29 12:01:19.621821 containerd[1682]: time="2025-01-29T12:01:19.621747860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-krvv7,Uid:5cce7f93-5a7f-4e55-81c0-95de17e446e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26\"" Jan 29 12:01:19.635284 containerd[1682]: time="2025-01-29T12:01:19.635215157Z" level=info msg="CreateContainer within sandbox \"2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:01:20.133293 containerd[1682]: time="2025-01-29T12:01:20.132474405Z" level=info msg="StopPodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\"" Jan 29 12:01:20.133293 containerd[1682]: time="2025-01-29T12:01:20.132984085Z" level=info msg="StopPodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\"" Jan 29 12:01:20.135710 containerd[1682]: time="2025-01-29T12:01:20.134809648Z" level=info msg="StopPodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\"" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.234 [INFO][4820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.234 [INFO][4820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" iface="eth0" netns="/var/run/netns/cni-9e32e6d9-cc8e-5184-6412-d63a5425adb8" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.235 [INFO][4820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" iface="eth0" netns="/var/run/netns/cni-9e32e6d9-cc8e-5184-6412-d63a5425adb8" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.235 [INFO][4820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" iface="eth0" netns="/var/run/netns/cni-9e32e6d9-cc8e-5184-6412-d63a5425adb8" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.235 [INFO][4820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.235 [INFO][4820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.256 [INFO][4839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.256 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.256 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.268 [WARNING][4839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.268 [INFO][4839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.271 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:20.275970 containerd[1682]: 2025-01-29 12:01:20.273 [INFO][4820] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:20.281112 containerd[1682]: time="2025-01-29T12:01:20.276805193Z" level=info msg="TearDown network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" successfully" Jan 29 12:01:20.281112 containerd[1682]: time="2025-01-29T12:01:20.276851153Z" level=info msg="StopPodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" returns successfully" Jan 29 12:01:20.279451 systemd[1]: run-netns-cni\x2d9e32e6d9\x2dcc8e\x2d5184\x2d6412\x2dd63a5425adb8.mount: Deactivated successfully. Jan 29 12:01:20.282295 containerd[1682]: time="2025-01-29T12:01:20.281610439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b7ld,Uid:4021a8b2-0983-46eb-bc49-b0d6f7f6429e,Namespace:calico-system,Attempt:1,}" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.211 [INFO][4821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.212 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" iface="eth0" netns="/var/run/netns/cni-c7e9ff6f-b5aa-04e2-d715-1448c3f6befb" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.212 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" iface="eth0" netns="/var/run/netns/cni-c7e9ff6f-b5aa-04e2-d715-1448c3f6befb" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.212 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" iface="eth0" netns="/var/run/netns/cni-c7e9ff6f-b5aa-04e2-d715-1448c3f6befb" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.213 [INFO][4821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.213 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.274 [INFO][4834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.274 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.275 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.292 [WARNING][4834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.292 [INFO][4834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.294 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:20.297976 containerd[1682]: 2025-01-29 12:01:20.296 [INFO][4821] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:20.301266 systemd[1]: run-netns-cni\x2dc7e9ff6f\x2db5aa\x2d04e2\x2dd715\x2d1448c3f6befb.mount: Deactivated successfully. Jan 29 12:01:20.301716 containerd[1682]: time="2025-01-29T12:01:20.301405145Z" level=info msg="TearDown network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" successfully" Jan 29 12:01:20.301716 containerd[1682]: time="2025-01-29T12:01:20.301438145Z" level=info msg="StopPodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" returns successfully" Jan 29 12:01:20.303987 containerd[1682]: time="2025-01-29T12:01:20.303940348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-wzcct,Uid:bfd5f24b-a59e-4991-be63-4ecca56f7298,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.229 [INFO][4810] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.229 [INFO][4810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" iface="eth0" netns="/var/run/netns/cni-f20d7f5b-cd25-2ffd-a47f-d9040536b580" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.231 [INFO][4810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" iface="eth0" netns="/var/run/netns/cni-f20d7f5b-cd25-2ffd-a47f-d9040536b580" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.232 [INFO][4810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" iface="eth0" netns="/var/run/netns/cni-f20d7f5b-cd25-2ffd-a47f-d9040536b580" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.232 [INFO][4810] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.232 [INFO][4810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.290 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.290 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.294 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.309 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.309 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.311 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:20.314174 containerd[1682]: 2025-01-29 12:01:20.312 [INFO][4810] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:20.314780 containerd[1682]: time="2025-01-29T12:01:20.314607922Z" level=info msg="TearDown network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" successfully" Jan 29 12:01:20.314780 containerd[1682]: time="2025-01-29T12:01:20.314689882Z" level=info msg="StopPodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" returns successfully" Jan 29 12:01:20.316499 containerd[1682]: time="2025-01-29T12:01:20.316459004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679c5f8dc4-tpdkk,Uid:5b402940-58cf-4f1a-a6f1-92b0dbe92e45,Namespace:calico-system,Attempt:1,}" Jan 29 12:01:20.317987 systemd[1]: run-netns-cni\x2df20d7f5b\x2dcd25\x2d2ffd\x2da47f\x2dd9040536b580.mount: Deactivated successfully. Jan 29 12:01:20.340912 containerd[1682]: time="2025-01-29T12:01:20.340447236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:20.340912 containerd[1682]: time="2025-01-29T12:01:20.340812756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:20.340912 containerd[1682]: time="2025-01-29T12:01:20.340826316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:20.341206 containerd[1682]: time="2025-01-29T12:01:20.341042756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:20.360813 systemd[1]: Started cri-containerd-0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e.scope - libcontainer container 0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e. Jan 29 12:01:20.392161 containerd[1682]: time="2025-01-29T12:01:20.391945023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s7b7m,Uid:c7fa59ba-a213-4b70-ae70-9ad0728fde63,Namespace:kube-system,Attempt:1,} returns sandbox id \"0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e\"" Jan 29 12:01:20.397381 containerd[1682]: time="2025-01-29T12:01:20.397260030Z" level=info msg="CreateContainer within sandbox \"0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:01:20.637125 containerd[1682]: time="2025-01-29T12:01:20.636862141Z" level=info msg="CreateContainer within sandbox \"2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a36c54f6f411262cb2fd75317b87b0483b90173c8b5941b74f397ea29850e180\"" Jan 29 12:01:20.638111 containerd[1682]: time="2025-01-29T12:01:20.638083503Z" level=info msg="StartContainer for \"a36c54f6f411262cb2fd75317b87b0483b90173c8b5941b74f397ea29850e180\"" Jan 29 12:01:20.669734 systemd[1]: Started cri-containerd-a36c54f6f411262cb2fd75317b87b0483b90173c8b5941b74f397ea29850e180.scope - libcontainer container a36c54f6f411262cb2fd75317b87b0483b90173c8b5941b74f397ea29850e180. 
Jan 29 12:01:20.670775 containerd[1682]: time="2025-01-29T12:01:20.669836464Z" level=info msg="CreateContainer within sandbox \"0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8d025949e01a120f64e602c5197eaa68410b84a543d69168d6eaccdfc631a5a\"" Jan 29 12:01:20.672174 containerd[1682]: time="2025-01-29T12:01:20.671148146Z" level=info msg="StartContainer for \"b8d025949e01a120f64e602c5197eaa68410b84a543d69168d6eaccdfc631a5a\"" Jan 29 12:01:20.739809 systemd[1]: Started cri-containerd-b8d025949e01a120f64e602c5197eaa68410b84a543d69168d6eaccdfc631a5a.scope - libcontainer container b8d025949e01a120f64e602c5197eaa68410b84a543d69168d6eaccdfc631a5a. Jan 29 12:01:20.743650 containerd[1682]: time="2025-01-29T12:01:20.743239000Z" level=info msg="StartContainer for \"a36c54f6f411262cb2fd75317b87b0483b90173c8b5941b74f397ea29850e180\" returns successfully" Jan 29 12:01:20.923885 systemd-networkd[1466]: calif2c3cd21d5d: Gained IPv6LL Jan 29 12:01:20.937392 containerd[1682]: time="2025-01-29T12:01:20.937257133Z" level=info msg="StartContainer for \"b8d025949e01a120f64e602c5197eaa68410b84a543d69168d6eaccdfc631a5a\" returns successfully" Jan 29 12:01:20.975048 systemd-networkd[1466]: cali47c8c9fe333: Link UP Jan 29 12:01:20.976373 systemd-networkd[1466]: cali47c8c9fe333: Gained carrier Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.682 [INFO][4891] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0 csi-node-driver- calico-system 4021a8b2-0983-46eb-bc49-b0d6f7f6429e 768 0 2025-01-29 12:00:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-ecab7ceadc csi-node-driver-6b7ld eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali47c8c9fe333 [] []}} ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.686 [INFO][4891] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.797 [INFO][4967] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" HandleID="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.854 [INFO][4967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" HandleID="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000316d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-ecab7ceadc", "pod":"csi-node-driver-6b7ld", "timestamp":"2025-01-29 12:01:20.797403431 +0000 UTC"}, Hostname:"ci-4081.3.0-a-ecab7ceadc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.854 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.854 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.854 [INFO][4967] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-ecab7ceadc' Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.861 [INFO][4967] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.881 [INFO][4967] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.943 [INFO][4967] ipam/ipam.go 489: Trying affinity for 192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.946 [INFO][4967] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.950 [INFO][4967] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.950 [INFO][4967] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.952 [INFO][4967] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069 Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.958 [INFO][4967] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.968 [INFO][4967] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.131/26] block=192.168.64.128/26 handle="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.968 [INFO][4967] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.131/26] handle="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.968 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:01:20.998175 containerd[1682]: 2025-01-29 12:01:20.968 [INFO][4967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.131/26] IPv6=[] ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" HandleID="k8s-pod-network.8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.998872 containerd[1682]: 2025-01-29 12:01:20.971 [INFO][4891] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4021a8b2-0983-46eb-bc49-b0d6f7f6429e", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"", Pod:"csi-node-driver-6b7ld", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali47c8c9fe333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:20.998872 containerd[1682]: 2025-01-29 12:01:20.971 [INFO][4891] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.131/32] ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.998872 containerd[1682]: 2025-01-29 12:01:20.971 [INFO][4891] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47c8c9fe333 ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.998872 containerd[1682]: 2025-01-29 12:01:20.976 [INFO][4891] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:20.998872 containerd[1682]: 2025-01-29 12:01:20.977 [INFO][4891] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4021a8b2-0983-46eb-bc49-b0d6f7f6429e", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069", Pod:"csi-node-driver-6b7ld", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali47c8c9fe333", MAC:"36:56:1d:e0:5d:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:20.998872 containerd[1682]: 2025-01-29 12:01:20.995 [INFO][4891] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069" Namespace="calico-system" Pod="csi-node-driver-6b7ld" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:21.039999 systemd-networkd[1466]: calic7e907e3ba9: Link UP Jan 29 12:01:21.040931 systemd-networkd[1466]: calic7e907e3ba9: Gained carrier Jan 29 12:01:21.051553 containerd[1682]: time="2025-01-29T12:01:21.051333441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:21.052395 containerd[1682]: time="2025-01-29T12:01:21.052195282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:21.052395 containerd[1682]: time="2025-01-29T12:01:21.052257762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:21.052560 containerd[1682]: time="2025-01-29T12:01:21.052380043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.780 [INFO][4925] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0 calico-kube-controllers-679c5f8dc4- calico-system 5b402940-58cf-4f1a-a6f1-92b0dbe92e45 767 0 2025-01-29 12:00:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:679c5f8dc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-ecab7ceadc calico-kube-controllers-679c5f8dc4-tpdkk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7e907e3ba9 [] []}} ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.781 [INFO][4925] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.861 [INFO][4996] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" HandleID="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.886 [INFO][4996] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" HandleID="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-ecab7ceadc", "pod":"calico-kube-controllers-679c5f8dc4-tpdkk", "timestamp":"2025-01-29 12:01:20.861557274 +0000 UTC"}, Hostname:"ci-4081.3.0-a-ecab7ceadc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.935 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.968 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.968 [INFO][4996] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-ecab7ceadc' Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.972 [INFO][4996] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.983 [INFO][4996] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:20.999 [INFO][4996] ipam/ipam.go 489: Trying affinity for 192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.003 [INFO][4996] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.006 [INFO][4996] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.006 [INFO][4996] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.012 [INFO][4996] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.020 [INFO][4996] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.033 [INFO][4996] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.132/26] block=192.168.64.128/26 handle="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.033 [INFO][4996] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.132/26] handle="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.033 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
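The [4996] walk above is the whole allocation in miniature: take the host-wide lock, confirm this host's affinity to block 192.168.64.128/26, load the block, and claim the next free address (.132, since lower addresses are already held). A toy model of that next-free step, assuming the already-claimed set implied by the surrounding records; an illustration, not Calico's ipam.go:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affine block and returns the first unclaimed address.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; Calico would try another block
}

func main() {
	block := netip.MustParsePrefix("192.168.64.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.64.128"): true, // block base address
		netip.MustParseAddr("192.168.64.129"): true, // coredns-7db6d8ff4d-krvv7
		netip.MustParseAddr("192.168.64.130"): true, // presumably coredns-7db6d8ff4d-s7b7m
		netip.MustParseAddr("192.168.64.131"): true, // csi-node-driver-6b7ld
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Printf("claimed %s/26 (the log claims 192.168.64.132/26 here)\n", ip)
	}
}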
Jan 29 12:01:21.077057 containerd[1682]: 2025-01-29 12:01:21.033 [INFO][4996] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.132/26] IPv6=[] ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" HandleID="k8s-pod-network.367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.079458 containerd[1682]: 2025-01-29 12:01:21.036 [INFO][4925] cni-plugin/k8s.go 386: Populated endpoint ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0", GenerateName:"calico-kube-controllers-679c5f8dc4-", Namespace:"calico-system", SelfLink:"", UID:"5b402940-58cf-4f1a-a6f1-92b0dbe92e45", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"679c5f8dc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"", Pod:"calico-kube-controllers-679c5f8dc4-tpdkk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7e907e3ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:21.079458 containerd[1682]: 2025-01-29 12:01:21.036 [INFO][4925] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.132/32] ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.079458 containerd[1682]: 2025-01-29 12:01:21.036 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7e907e3ba9 ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.079458 containerd[1682]: 2025-01-29 12:01:21.042 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.079458 
containerd[1682]: 2025-01-29 12:01:21.044 [INFO][4925] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0", GenerateName:"calico-kube-controllers-679c5f8dc4-", Namespace:"calico-system", SelfLink:"", UID:"5b402940-58cf-4f1a-a6f1-92b0dbe92e45", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"679c5f8dc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a", Pod:"calico-kube-controllers-679c5f8dc4-tpdkk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7e907e3ba9", MAC:"12:87:41:e4:58:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:21.079458 containerd[1682]: 2025-01-29 12:01:21.069 [INFO][4925] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a" Namespace="calico-system" Pod="calico-kube-controllers-679c5f8dc4-tpdkk" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:21.091853 systemd[1]: Started cri-containerd-8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069.scope - libcontainer container 8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069. Jan 29 12:01:21.126073 systemd-networkd[1466]: cali0f995d54188: Link UP Jan 29 12:01:21.127450 systemd-networkd[1466]: cali0f995d54188: Gained carrier Jan 29 12:01:21.132816 containerd[1682]: time="2025-01-29T12:01:21.131374505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:21.136095 containerd[1682]: time="2025-01-29T12:01:21.133092908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:21.140785 containerd[1682]: time="2025-01-29T12:01:21.136221592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:21.140785 containerd[1682]: time="2025-01-29T12:01:21.136338552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:20.850 [INFO][4943] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0 calico-apiserver-5b958f5bc6- calico-apiserver bfd5f24b-a59e-4991-be63-4ecca56f7298 766 0 2025-01-29 12:00:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b958f5bc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-ecab7ceadc calico-apiserver-5b958f5bc6-wzcct eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f995d54188 [] []}} ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:20.851 [INFO][4943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:20.918 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" HandleID="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:20.953 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" HandleID="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-ecab7ceadc", "pod":"calico-apiserver-5b958f5bc6-wzcct", "timestamp":"2025-01-29 12:01:20.918197348 +0000 UTC"}, Hostname:"ci-4081.3.0-a-ecab7ceadc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:20.953 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.033 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.033 [INFO][5014] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-ecab7ceadc' Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.038 [INFO][5014] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.046 [INFO][5014] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.059 [INFO][5014] ipam/ipam.go 489: Trying affinity for 192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.067 [INFO][5014] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.083 [INFO][5014] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.084 [INFO][5014] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.092 [INFO][5014] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.102 [INFO][5014] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.115 [INFO][5014] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.133/26] block=192.168.64.128/26 handle="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.115 [INFO][5014] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.133/26] handle="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.115 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
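Note the timestamps: [5014] announced "About to acquire host-wide IPAM lock" at :20.953 but only acquired it at :21.033, the instant [4996] released it. Concurrent CNI ADDs are serialized through that lock so two pods can never claim the same address. A minimal sketch of the pattern with an in-process mutex (the real lock must also coordinate across separate plugin processes):

package main

import (
	"fmt"
	"sync"
	"time"
)

var hostIPAMLock sync.Mutex

func assign(id string, work time.Duration, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", id)
	hostIPAMLock.Lock()
	fmt.Printf("[%s] acquired host-wide IPAM lock\n", id)
	time.Sleep(work) // stands in for: load block, claim IP, write block back
	hostIPAMLock.Unlock()
	fmt.Printf("[%s] released host-wide IPAM lock\n", id)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go assign("4996", 65*time.Millisecond, &wg) // holds the lock first
	go assign("5014", 10*time.Millisecond, &wg) // blocks until 4996 releases
	wg.Wait()
}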
Jan 29 12:01:21.152224 containerd[1682]: 2025-01-29 12:01:21.116 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.133/26] IPv6=[] ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" HandleID="k8s-pod-network.064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.153459 containerd[1682]: 2025-01-29 12:01:21.122 [INFO][4943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfd5f24b-a59e-4991-be63-4ecca56f7298", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"", Pod:"calico-apiserver-5b958f5bc6-wzcct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f995d54188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:21.153459 containerd[1682]: 2025-01-29 12:01:21.122 [INFO][4943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.133/32] ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.153459 containerd[1682]: 2025-01-29 12:01:21.122 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f995d54188 ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.153459 containerd[1682]: 2025-01-29 12:01:21.128 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.153459 containerd[1682]: 2025-01-29 12:01:21.129 [INFO][4943] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfd5f24b-a59e-4991-be63-4ecca56f7298", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e", Pod:"calico-apiserver-5b958f5bc6-wzcct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f995d54188", MAC:"1a:bb:d8:ba:f6:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:21.153459 containerd[1682]: 2025-01-29 12:01:21.146 [INFO][4943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-wzcct" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:21.175902 systemd[1]: Started cri-containerd-367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a.scope - libcontainer container 367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a. Jan 29 12:01:21.179315 containerd[1682]: time="2025-01-29T12:01:21.178996687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6b7ld,Uid:4021a8b2-0983-46eb-bc49-b0d6f7f6429e,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069\"" Jan 29 12:01:21.179792 systemd-networkd[1466]: cali669f46d71ce: Gained IPv6LL Jan 29 12:01:21.188318 containerd[1682]: time="2025-01-29T12:01:21.188201739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:01:21.209527 containerd[1682]: time="2025-01-29T12:01:21.209417607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:21.209781 containerd[1682]: time="2025-01-29T12:01:21.209497007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:21.209781 containerd[1682]: time="2025-01-29T12:01:21.209510007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:21.209781 containerd[1682]: time="2025-01-29T12:01:21.209602927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:21.230845 systemd[1]: Started cri-containerd-064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e.scope - libcontainer container 064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e. Jan 29 12:01:21.241224 containerd[1682]: time="2025-01-29T12:01:21.240963288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-679c5f8dc4-tpdkk,Uid:5b402940-58cf-4f1a-a6f1-92b0dbe92e45,Namespace:calico-system,Attempt:1,} returns sandbox id \"367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a\"" Jan 29 12:01:21.278147 containerd[1682]: time="2025-01-29T12:01:21.278087536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-wzcct,Uid:bfd5f24b-a59e-4991-be63-4ecca56f7298,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e\"" Jan 29 12:01:21.350152 kubelet[3290]: I0129 12:01:21.350073 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s7b7m" podStartSLOduration=32.35005483 podStartE2EDuration="32.35005483s" podCreationTimestamp="2025-01-29 12:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:21.331601566 +0000 UTC m=+47.336758893" watchObservedRunningTime="2025-01-29 12:01:21.35005483 +0000 UTC m=+47.355212157" Jan 29 12:01:21.357647 update_engine[1663]: I20250129 12:01:21.357054 1663 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:01:21.357647 update_engine[1663]: I20250129 12:01:21.357260 1663 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:01:21.357647 update_engine[1663]: I20250129 12:01:21.357455 1663 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
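The update_engine lines above arm a libcurl transfer with a 1-second timeout; the records that follow show it failing with "Could not resolve host: disabled" (the update server URL has evidently been set to the literal, non-resolvable hostname "disabled", a common way of switching updates off on Flatcar) and scheduling another retry. A rough Go sketch of that fetch-with-retry shape; the URL is hypothetical, and the real update_engine drives libcurl from C++, not net/http:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func fetchWithRetries(url string, attempts int) error {
	client := &http.Client{Timeout: 1 * time.Second} // "Setting up timeout source: 1 seconds."
	var err error
	for i := 1; i <= attempts; i++ {
		var resp *http.Response
		if resp, err = client.Get(url); err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", i, err)
		time.Sleep(time.Second)
	}
	return err
}

func main() {
	// Hypothetical URL mirroring the unresolvable host seen in the log.
	_ = fetchWithRetries("https://disabled/update", 3)
}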
Jan 29 12:01:21.407820 update_engine[1663]: E20250129 12:01:21.407760 1663 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:01:21.407956 update_engine[1663]: I20250129 12:01:21.407855 1663 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 12:01:22.134631 containerd[1682]: time="2025-01-29T12:01:22.133569890Z" level=info msg="StopPodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\"" Jan 29 12:01:22.182333 kubelet[3290]: I0129 12:01:22.182051 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-krvv7" podStartSLOduration=33.182032153 podStartE2EDuration="33.182032153s" podCreationTimestamp="2025-01-29 12:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:21.351545152 +0000 UTC m=+47.356702479" watchObservedRunningTime="2025-01-29 12:01:22.182032153 +0000 UTC m=+48.187189480" Jan 29 12:01:22.204566 systemd-networkd[1466]: calic7e907e3ba9: Gained IPv6LL Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.182 [INFO][5204] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.182 [INFO][5204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" iface="eth0" netns="/var/run/netns/cni-8e4ffa19-dbe0-003d-15a8-a0491c66b72c" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.183 [INFO][5204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" iface="eth0" netns="/var/run/netns/cni-8e4ffa19-dbe0-003d-15a8-a0491c66b72c" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.184 [INFO][5204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" iface="eth0" netns="/var/run/netns/cni-8e4ffa19-dbe0-003d-15a8-a0491c66b72c" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.184 [INFO][5204] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.184 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.207 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.207 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.207 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.220 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.220 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.223 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:22.227548 containerd[1682]: 2025-01-29 12:01:22.225 [INFO][5204] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:22.229925 containerd[1682]: time="2025-01-29T12:01:22.229834976Z" level=info msg="TearDown network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" successfully" Jan 29 12:01:22.229925 containerd[1682]: time="2025-01-29T12:01:22.229884256Z" level=info msg="StopPodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" returns successfully" Jan 29 12:01:22.230555 containerd[1682]: time="2025-01-29T12:01:22.230520616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-dd9vs,Uid:acc59e98-b69f-4ebb-bb77-4faa7d499339,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:01:22.231443 systemd[1]: run-netns-cni\x2d8e4ffa19\x2ddbe0\x2d003d\x2d15a8\x2da0491c66b72c.mount: Deactivated successfully. 
Jan 29 12:01:22.267859 systemd-networkd[1466]: cali0f995d54188: Gained IPv6LL Jan 29 12:01:22.438289 systemd-networkd[1466]: cali5f2a8223186: Link UP Jan 29 12:01:22.440449 systemd-networkd[1466]: cali5f2a8223186: Gained carrier Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.300 [INFO][5223] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0 calico-apiserver-5b958f5bc6- calico-apiserver acc59e98-b69f-4ebb-bb77-4faa7d499339 799 0 2025-01-29 12:00:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b958f5bc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-ecab7ceadc calico-apiserver-5b958f5bc6-dd9vs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f2a8223186 [] []}} ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.300 [INFO][5223] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.334 [INFO][5233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" HandleID="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.350 [INFO][5233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" HandleID="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000433370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-ecab7ceadc", "pod":"calico-apiserver-5b958f5bc6-dd9vs", "timestamp":"2025-01-29 12:01:22.334495512 +0000 UTC"}, Hostname:"ci-4081.3.0-a-ecab7ceadc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.350 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.350 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.350 [INFO][5233] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-ecab7ceadc' Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.356 [INFO][5233] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.375 [INFO][5233] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.390 [INFO][5233] ipam/ipam.go 489: Trying affinity for 192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.393 [INFO][5233] ipam/ipam.go 155: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.403 [INFO][5233] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.403 [INFO][5233] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.405 [INFO][5233] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995 Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.415 [INFO][5233] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.425 [INFO][5233] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.64.134/26] block=192.168.64.128/26 handle="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.425 [INFO][5233] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.64.134/26] handle="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" host="ci-4081.3.0-a-ecab7ceadc" Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.425 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
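This is now the third ADD to land in the same affine block, 192.168.64.128/26, which matches Calico's default IPv4 block size of /26. A quick check of what such a block holds and where the allocations above sit inside it:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.64.128/26")
	capacity := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses
	fmt.Printf("block %s holds %d addresses\n", block, capacity)
	// The walks above have handed out .129 (coredns-krvv7), .130 (presumably
	// coredns-s7b7m), .131 (csi-node-driver), .132 (kube-controllers),
	// .133 (apiserver-wzcct) and now .134 (apiserver-dd9vs).
}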
Jan 29 12:01:22.465151 containerd[1682]: 2025-01-29 12:01:22.425 [INFO][5233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.134/26] IPv6=[] ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" HandleID="k8s-pod-network.06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.465819 containerd[1682]: 2025-01-29 12:01:22.428 [INFO][5223] cni-plugin/k8s.go 386: Populated endpoint ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"acc59e98-b69f-4ebb-bb77-4faa7d499339", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"", Pod:"calico-apiserver-5b958f5bc6-dd9vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f2a8223186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:22.465819 containerd[1682]: 2025-01-29 12:01:22.428 [INFO][5223] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.64.134/32] ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.465819 containerd[1682]: 2025-01-29 12:01:22.428 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f2a8223186 ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.465819 containerd[1682]: 2025-01-29 12:01:22.441 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.465819 containerd[1682]: 2025-01-29 12:01:22.441 [INFO][5223] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"acc59e98-b69f-4ebb-bb77-4faa7d499339", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995", Pod:"calico-apiserver-5b958f5bc6-dd9vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f2a8223186", MAC:"0e:bd:96:8a:97:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:22.465819 containerd[1682]: 2025-01-29 12:01:22.459 [INFO][5223] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995" Namespace="calico-apiserver" Pod="calico-apiserver-5b958f5bc6-dd9vs" WorkloadEndpoint="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:22.498383 containerd[1682]: time="2025-01-29T12:01:22.498270605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:22.498383 containerd[1682]: time="2025-01-29T12:01:22.498336805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:22.498383 containerd[1682]: time="2025-01-29T12:01:22.498349045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:22.499578 containerd[1682]: time="2025-01-29T12:01:22.499249846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:22.538110 systemd[1]: Started cri-containerd-06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995.scope - libcontainer container 06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995. 
Jan 29 12:01:22.586217 containerd[1682]: time="2025-01-29T12:01:22.586175640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b958f5bc6-dd9vs,Uid:acc59e98-b69f-4ebb-bb77-4faa7d499339,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995\"" Jan 29 12:01:22.660664 containerd[1682]: time="2025-01-29T12:01:22.660341331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.663850 containerd[1682]: time="2025-01-29T12:01:22.663801609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 12:01:22.667861 containerd[1682]: time="2025-01-29T12:01:22.667787287Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.673886 containerd[1682]: time="2025-01-29T12:01:22.673809964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:22.674852 containerd[1682]: time="2025-01-29T12:01:22.674455484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.485993104s" Jan 29 12:01:22.674852 containerd[1682]: time="2025-01-29T12:01:22.674492244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 12:01:22.675898 containerd[1682]: time="2025-01-29T12:01:22.675606843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:01:22.677344 containerd[1682]: time="2025-01-29T12:01:22.677215162Z" level=info msg="CreateContainer within sandbox \"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:01:22.738699 containerd[1682]: time="2025-01-29T12:01:22.738551251Z" level=info msg="CreateContainer within sandbox \"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bcd44bba28900153b2d6a60a6aebea7955b4372fe55070a0e08b7bfcfa6fb7d4\"" Jan 29 12:01:22.740664 containerd[1682]: time="2025-01-29T12:01:22.740305210Z" level=info msg="StartContainer for \"bcd44bba28900153b2d6a60a6aebea7955b4372fe55070a0e08b7bfcfa6fb7d4\"" Jan 29 12:01:22.768824 systemd[1]: Started cri-containerd-bcd44bba28900153b2d6a60a6aebea7955b4372fe55070a0e08b7bfcfa6fb7d4.scope - libcontainer container bcd44bba28900153b2d6a60a6aebea7955b4372fe55070a0e08b7bfcfa6fb7d4. 
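containerd times every pull: "in 1.485993104s" for the csi image above, with more figures following below. A small sketch that extracts image name and duration from such journal text with a regular expression; the sample line is abbreviated from the record above:

package main

import (
	"fmt"
	"regexp"
	"time"
)

// Matches containerd's escaped-quote "Pulled image" records as they appear
// in the journal text above.
var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".*in ([0-9.]+m?s)`)

func main() {
	line := `msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11...\" in 1.485993104s"`
	if m := pulled.FindStringSubmatch(line); m != nil {
		d, err := time.ParseDuration(m[2])
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s pulled in %v\n", m[1], d.Round(time.Millisecond))
	}
}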
Jan 29 12:01:22.802584 containerd[1682]: time="2025-01-29T12:01:22.802531058Z" level=info msg="StartContainer for \"bcd44bba28900153b2d6a60a6aebea7955b4372fe55070a0e08b7bfcfa6fb7d4\" returns successfully" Jan 29 12:01:23.035827 systemd-networkd[1466]: cali47c8c9fe333: Gained IPv6LL Jan 29 12:01:24.380165 systemd-networkd[1466]: cali5f2a8223186: Gained IPv6LL Jan 29 12:01:25.053126 containerd[1682]: time="2025-01-29T12:01:25.053062417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:25.055453 containerd[1682]: time="2025-01-29T12:01:25.055382536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 12:01:25.059115 containerd[1682]: time="2025-01-29T12:01:25.059052494Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:25.064564 containerd[1682]: time="2025-01-29T12:01:25.064510411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:25.065697 containerd[1682]: time="2025-01-29T12:01:25.065296211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.389632688s" Jan 29 12:01:25.065697 containerd[1682]: time="2025-01-29T12:01:25.065342331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 12:01:25.067153 containerd[1682]: time="2025-01-29T12:01:25.067083490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:01:25.093550 containerd[1682]: time="2025-01-29T12:01:25.093368636Z" level=info msg="CreateContainer within sandbox \"367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:01:25.138199 containerd[1682]: time="2025-01-29T12:01:25.138079573Z" level=info msg="CreateContainer within sandbox \"367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"44f43d7906900fc7def12e72b22e4c00a9ab37a6654a0e27dd29d3caa4d68168\"" Jan 29 12:01:25.139933 containerd[1682]: time="2025-01-29T12:01:25.138777773Z" level=info msg="StartContainer for \"44f43d7906900fc7def12e72b22e4c00a9ab37a6654a0e27dd29d3caa4d68168\"" Jan 29 12:01:25.178050 systemd[1]: Started cri-containerd-44f43d7906900fc7def12e72b22e4c00a9ab37a6654a0e27dd29d3caa4d68168.scope - libcontainer container 44f43d7906900fc7def12e72b22e4c00a9ab37a6654a0e27dd29d3caa4d68168. 
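The "Gained IPv6LL" events in these records mark each cali* veth acquiring an IPv6 link-local address. Assuming the default EUI-64 scheme (a kernel's addr_gen_mode can be configured otherwise), the address follows from the MAC by flipping the universal/local bit and splicing in ff:fe. Worked through for the csi-node-driver veth's MAC logged earlier:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// eui64LinkLocal derives the fe80::/64 address the kernel would pick for a
// MAC under the default EUI-64 scheme.
func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 prefix
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe // EUI-64 filler
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, _ := net.ParseMAC("36:56:1d:e0:5d:e4") // cali47c8c9fe333, from the log
	fmt.Println(eui64LinkLocal(mac))            // fe80::3456:1dff:fee0:5de4
}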
Jan 29 12:01:25.215529 containerd[1682]: time="2025-01-29T12:01:25.215465133Z" level=info msg="StartContainer for \"44f43d7906900fc7def12e72b22e4c00a9ab37a6654a0e27dd29d3caa4d68168\" returns successfully" Jan 29 12:01:25.366704 kubelet[3290]: I0129 12:01:25.366377 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-679c5f8dc4-tpdkk" podStartSLOduration=26.542718296 podStartE2EDuration="30.366359975s" podCreationTimestamp="2025-01-29 12:00:55 +0000 UTC" firstStartedPulling="2025-01-29 12:01:21.242991851 +0000 UTC m=+47.248149178" lastFinishedPulling="2025-01-29 12:01:25.06663353 +0000 UTC m=+51.071790857" observedRunningTime="2025-01-29 12:01:25.366087536 +0000 UTC m=+51.371244863" watchObservedRunningTime="2025-01-29 12:01:25.366359975 +0000 UTC m=+51.371517302" Jan 29 12:01:28.243664 containerd[1682]: time="2025-01-29T12:01:28.243248598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:28.246802 containerd[1682]: time="2025-01-29T12:01:28.246747482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 12:01:28.250417 containerd[1682]: time="2025-01-29T12:01:28.250355567Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:28.256399 containerd[1682]: time="2025-01-29T12:01:28.256320974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:28.257359 containerd[1682]: time="2025-01-29T12:01:28.257194615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.190065485s" Jan 29 12:01:28.257359 containerd[1682]: time="2025-01-29T12:01:28.257238775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:01:28.258727 containerd[1682]: time="2025-01-29T12:01:28.258359056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:01:28.260580 containerd[1682]: time="2025-01-29T12:01:28.260402779Z" level=info msg="CreateContainer within sandbox \"064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:01:28.296875 containerd[1682]: time="2025-01-29T12:01:28.296781982Z" level=info msg="CreateContainer within sandbox \"064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73f6b7510c6043ed289f78b570c3c49e46e4082419ddde359f8a881bd77963ef\"" Jan 29 12:01:28.297409 containerd[1682]: time="2025-01-29T12:01:28.297354903Z" level=info msg="StartContainer for \"73f6b7510c6043ed289f78b570c3c49e46e4082419ddde359f8a881bd77963ef\"" Jan 29 12:01:28.344844 systemd[1]: Started 
cri-containerd-73f6b7510c6043ed289f78b570c3c49e46e4082419ddde359f8a881bd77963ef.scope - libcontainer container 73f6b7510c6043ed289f78b570c3c49e46e4082419ddde359f8a881bd77963ef. Jan 29 12:01:28.381352 containerd[1682]: time="2025-01-29T12:01:28.381298803Z" level=info msg="StartContainer for \"73f6b7510c6043ed289f78b570c3c49e46e4082419ddde359f8a881bd77963ef\" returns successfully" Jan 29 12:01:28.841565 containerd[1682]: time="2025-01-29T12:01:28.841509751Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:28.844676 containerd[1682]: time="2025-01-29T12:01:28.844343155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:01:28.847977 containerd[1682]: time="2025-01-29T12:01:28.847854279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 589.447823ms" Jan 29 12:01:28.847977 containerd[1682]: time="2025-01-29T12:01:28.847898519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 12:01:28.849676 containerd[1682]: time="2025-01-29T12:01:28.848976480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:01:28.850798 containerd[1682]: time="2025-01-29T12:01:28.850772362Z" level=info msg="CreateContainer within sandbox \"06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:01:28.902081 containerd[1682]: time="2025-01-29T12:01:28.902025223Z" level=info msg="CreateContainer within sandbox \"06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d68605dffbb1fc2ba05f06f35bc7e33b36584a7225f7a40de263139ffacba0b4\"" Jan 29 12:01:28.905593 containerd[1682]: time="2025-01-29T12:01:28.904338906Z" level=info msg="StartContainer for \"d68605dffbb1fc2ba05f06f35bc7e33b36584a7225f7a40de263139ffacba0b4\"" Jan 29 12:01:28.933820 systemd[1]: Started cri-containerd-d68605dffbb1fc2ba05f06f35bc7e33b36584a7225f7a40de263139ffacba0b4.scope - libcontainer container d68605dffbb1fc2ba05f06f35bc7e33b36584a7225f7a40de263139ffacba0b4. 
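The pod_startup_latency_tracker records above fit together arithmetically: the SLO duration is the end-to-end duration minus the window spent pulling images (firstStartedPulling through lastFinishedPulling). Checking that against the calico-kube-controllers record logged at 12:01:25:

package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-01-29 12:01:21.242991851 +0000 UTC") // firstStartedPulling
	last, _ := time.Parse(layout, "2025-01-29 12:01:25.06663353 +0000 UTC")   // lastFinishedPulling
	e2e := 30.366359975                                                       // podStartE2EDuration, seconds
	pull := last.Sub(first).Seconds()
	fmt.Printf("pull window = %.9fs\n", pull)     // 3.823641679s
	fmt.Printf("E2E - pull  = %.9fs\n", e2e-pull) // 26.542718296 = podStartSLOduration
}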
Jan 29 12:01:28.983870 containerd[1682]: time="2025-01-29T12:01:28.983811681Z" level=info msg="StartContainer for \"d68605dffbb1fc2ba05f06f35bc7e33b36584a7225f7a40de263139ffacba0b4\" returns successfully" Jan 29 12:01:29.410353 kubelet[3290]: I0129 12:01:29.409975 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b958f5bc6-wzcct" podStartSLOduration=27.431275631 podStartE2EDuration="34.409953509s" podCreationTimestamp="2025-01-29 12:00:55 +0000 UTC" firstStartedPulling="2025-01-29 12:01:21.279521938 +0000 UTC m=+47.284679265" lastFinishedPulling="2025-01-29 12:01:28.258199816 +0000 UTC m=+54.263357143" observedRunningTime="2025-01-29 12:01:29.38547124 +0000 UTC m=+55.390628567" watchObservedRunningTime="2025-01-29 12:01:29.409953509 +0000 UTC m=+55.415110836" Jan 29 12:01:30.376663 kubelet[3290]: I0129 12:01:30.375861 3290 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:01:30.611921 kubelet[3290]: I0129 12:01:30.611795 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b958f5bc6-dd9vs" podStartSLOduration=29.351398344 podStartE2EDuration="35.611775261s" podCreationTimestamp="2025-01-29 12:00:55 +0000 UTC" firstStartedPulling="2025-01-29 12:01:22.588784363 +0000 UTC m=+48.593941650" lastFinishedPulling="2025-01-29 12:01:28.84916124 +0000 UTC m=+54.854318567" observedRunningTime="2025-01-29 12:01:29.412773312 +0000 UTC m=+55.417930639" watchObservedRunningTime="2025-01-29 12:01:30.611775261 +0000 UTC m=+56.616932588" Jan 29 12:01:31.184531 containerd[1682]: time="2025-01-29T12:01:31.184475164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:31.187874 containerd[1682]: time="2025-01-29T12:01:31.187824518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 12:01:31.191435 containerd[1682]: time="2025-01-29T12:01:31.191373792Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:31.197942 containerd[1682]: time="2025-01-29T12:01:31.197860941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:31.198705 containerd[1682]: time="2025-01-29T12:01:31.198540860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.34953358s" Jan 29 12:01:31.198705 containerd[1682]: time="2025-01-29T12:01:31.198579980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 12:01:31.203543 containerd[1682]: time="2025-01-29T12:01:31.203405852Z" level=info msg="CreateContainer within sandbox \"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:01:31.247870 containerd[1682]: time="2025-01-29T12:01:31.247821938Z" level=info msg="CreateContainer within sandbox \"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5e6273c50db1bf832ded64ec6d10694daa54e2648088c1e216f9c71906674578\"" Jan 29 12:01:31.248574 containerd[1682]: time="2025-01-29T12:01:31.248534537Z" level=info msg="StartContainer for \"5e6273c50db1bf832ded64ec6d10694daa54e2648088c1e216f9c71906674578\"" Jan 29 12:01:31.282830 systemd[1]: Started cri-containerd-5e6273c50db1bf832ded64ec6d10694daa54e2648088c1e216f9c71906674578.scope - libcontainer container 5e6273c50db1bf832ded64ec6d10694daa54e2648088c1e216f9c71906674578. Jan 29 12:01:31.315251 containerd[1682]: time="2025-01-29T12:01:31.314986626Z" level=info msg="StartContainer for \"5e6273c50db1bf832ded64ec6d10694daa54e2648088c1e216f9c71906674578\" returns successfully" Jan 29 12:01:31.357566 update_engine[1663]: I20250129 12:01:31.357029 1663 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:01:31.357566 update_engine[1663]: I20250129 12:01:31.357298 1663 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:01:31.357566 update_engine[1663]: I20250129 12:01:31.357524 1663 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:01:31.406090 update_engine[1663]: E20250129 12:01:31.405880 1663 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:01:31.406090 update_engine[1663]: I20250129 12:01:31.405972 1663 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 12:01:32.231736 kubelet[3290]: I0129 12:01:32.231695 3290 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:01:32.235044 kubelet[3290]: I0129 12:01:32.235019 3290 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:01:34.110043 containerd[1682]: time="2025-01-29T12:01:34.110007011Z" level=info msg="StopPodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\"" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.167 [WARNING][5548] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5cce7f93-5a7f-4e55-81c0-95de17e446e6", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26", Pod:"coredns-7db6d8ff4d-krvv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali669f46d71ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.167 [INFO][5548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.167 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" iface="eth0" netns="" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.167 [INFO][5548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.167 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.208 [INFO][5556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.209 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.209 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.217 [WARNING][5556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.217 [INFO][5556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.219 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.222855 containerd[1682]: 2025-01-29 12:01:34.221 [INFO][5548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.224126 containerd[1682]: time="2025-01-29T12:01:34.222891073Z" level=info msg="TearDown network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" successfully" Jan 29 12:01:34.224126 containerd[1682]: time="2025-01-29T12:01:34.222917393Z" level=info msg="StopPodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" returns successfully" Jan 29 12:01:34.224707 containerd[1682]: time="2025-01-29T12:01:34.224474035Z" level=info msg="RemovePodSandbox for \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\"" Jan 29 12:01:34.224707 containerd[1682]: time="2025-01-29T12:01:34.224514195Z" level=info msg="Forcibly stopping sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\"" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.259 [WARNING][5574] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5cce7f93-5a7f-4e55-81c0-95de17e446e6", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"2ab6a7c3e82841805177929bf45d323cd35926421f1e7b9c2d8908a02e3c2e26", Pod:"coredns-7db6d8ff4d-krvv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali669f46d71ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.260 [INFO][5574] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.260 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" iface="eth0" netns="" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.260 [INFO][5574] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.260 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.281 [INFO][5580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.281 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.281 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.290 [WARNING][5580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.290 [INFO][5580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" HandleID="k8s-pod-network.fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--krvv7-eth0" Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.292 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.295411 containerd[1682]: 2025-01-29 12:01:34.294 [INFO][5574] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780" Jan 29 12:01:34.295863 containerd[1682]: time="2025-01-29T12:01:34.295456819Z" level=info msg="TearDown network for sandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" successfully" Jan 29 12:01:34.304892 containerd[1682]: time="2025-01-29T12:01:34.304801027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:01:34.305024 containerd[1682]: time="2025-01-29T12:01:34.304928707Z" level=info msg="RemovePodSandbox \"fe1c2c9519dfc62fa0eff9fab142483c7593f24dbe04c6d83dd33f3dddfb8780\" returns successfully" Jan 29 12:01:34.305634 containerd[1682]: time="2025-01-29T12:01:34.305583428Z" level=info msg="StopPodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\"" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.352 [WARNING][5598] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c7fa59ba-a213-4b70-ae70-9ad0728fde63", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e", Pod:"coredns-7db6d8ff4d-s7b7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c3cd21d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.353 [INFO][5598] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.353 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" iface="eth0" netns="" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.353 [INFO][5598] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.353 [INFO][5598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.377 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.378 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.378 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.387 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.387 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.389 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.392593 containerd[1682]: 2025-01-29 12:01:34.390 [INFO][5598] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.392593 containerd[1682]: time="2025-01-29T12:01:34.392474307Z" level=info msg="TearDown network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" successfully" Jan 29 12:01:34.392593 containerd[1682]: time="2025-01-29T12:01:34.392498427Z" level=info msg="StopPodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" returns successfully" Jan 29 12:01:34.394171 containerd[1682]: time="2025-01-29T12:01:34.393122867Z" level=info msg="RemovePodSandbox for \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\"" Jan 29 12:01:34.394171 containerd[1682]: time="2025-01-29T12:01:34.393155827Z" level=info msg="Forcibly stopping sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\"" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.435 [WARNING][5626] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c7fa59ba-a213-4b70-ae70-9ad0728fde63", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"0c6e0c2abc422c50c978219fdd66465fa9c246bda7638e5f72d4e14be7785d2e", Pod:"coredns-7db6d8ff4d-s7b7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2c3cd21d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.435 [INFO][5626] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.435 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" iface="eth0" netns="" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.435 [INFO][5626] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.435 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.455 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.455 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.455 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.464 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.464 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" HandleID="k8s-pod-network.d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-coredns--7db6d8ff4d--s7b7m-eth0" Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.465 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.468139 containerd[1682]: 2025-01-29 12:01:34.466 [INFO][5626] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605" Jan 29 12:01:34.468703 containerd[1682]: time="2025-01-29T12:01:34.468319255Z" level=info msg="TearDown network for sandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" successfully" Jan 29 12:01:34.487984 containerd[1682]: time="2025-01-29T12:01:34.487918273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:01:34.488107 containerd[1682]: time="2025-01-29T12:01:34.488015833Z" level=info msg="RemovePodSandbox \"d9d65d7a2680f767110ceb98a6b3ba8494f3b111a4589b2c1b4613a231114605\" returns successfully" Jan 29 12:01:34.488712 containerd[1682]: time="2025-01-29T12:01:34.488593234Z" level=info msg="StopPodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\"" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.539 [WARNING][5650] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"acc59e98-b69f-4ebb-bb77-4faa7d499339", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995", Pod:"calico-apiserver-5b958f5bc6-dd9vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f2a8223186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.540 [INFO][5650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.540 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" iface="eth0" netns="" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.540 [INFO][5650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.540 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.563 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.564 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.564 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.572 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.572 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.574 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.577384 containerd[1682]: 2025-01-29 12:01:34.575 [INFO][5650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.577384 containerd[1682]: time="2025-01-29T12:01:34.577341994Z" level=info msg="TearDown network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" successfully" Jan 29 12:01:34.577384 containerd[1682]: time="2025-01-29T12:01:34.577371514Z" level=info msg="StopPodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" returns successfully" Jan 29 12:01:34.578405 containerd[1682]: time="2025-01-29T12:01:34.578313075Z" level=info msg="RemovePodSandbox for \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\"" Jan 29 12:01:34.578405 containerd[1682]: time="2025-01-29T12:01:34.578352715Z" level=info msg="Forcibly stopping sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\"" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.615 [WARNING][5674] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"acc59e98-b69f-4ebb-bb77-4faa7d499339", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"06ac6b3206c556024586157cefa2c7c13b221ae058c2c463992aaca335651995", Pod:"calico-apiserver-5b958f5bc6-dd9vs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f2a8223186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.615 [INFO][5674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.615 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" iface="eth0" netns="" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.615 [INFO][5674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.615 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.636 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.636 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.636 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.645 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.645 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" HandleID="k8s-pod-network.c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--dd9vs-eth0" Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.646 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.649586 containerd[1682]: 2025-01-29 12:01:34.648 [INFO][5674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3" Jan 29 12:01:34.649586 containerd[1682]: time="2025-01-29T12:01:34.649484499Z" level=info msg="TearDown network for sandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" successfully" Jan 29 12:01:34.657069 containerd[1682]: time="2025-01-29T12:01:34.657004906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:01:34.657291 containerd[1682]: time="2025-01-29T12:01:34.657096666Z" level=info msg="RemovePodSandbox \"c58d0348ff362ba3592c4af8c7872fd89fca549c67cb3f66b160013a2a3e64f3\" returns successfully" Jan 29 12:01:34.658143 containerd[1682]: time="2025-01-29T12:01:34.657865307Z" level=info msg="StopPodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\"" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.695 [WARNING][5698] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0", GenerateName:"calico-kube-controllers-679c5f8dc4-", Namespace:"calico-system", SelfLink:"", UID:"5b402940-58cf-4f1a-a6f1-92b0dbe92e45", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"679c5f8dc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a", Pod:"calico-kube-controllers-679c5f8dc4-tpdkk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7e907e3ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.695 [INFO][5698] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.695 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" iface="eth0" netns="" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.696 [INFO][5698] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.696 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.716 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.716 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.717 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.725 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.725 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.727 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.730451 containerd[1682]: 2025-01-29 12:01:34.728 [INFO][5698] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.731542 containerd[1682]: time="2025-01-29T12:01:34.730892493Z" level=info msg="TearDown network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" successfully" Jan 29 12:01:34.731542 containerd[1682]: time="2025-01-29T12:01:34.730922573Z" level=info msg="StopPodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" returns successfully" Jan 29 12:01:34.731542 containerd[1682]: time="2025-01-29T12:01:34.731509733Z" level=info msg="RemovePodSandbox for \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\"" Jan 29 12:01:34.731542 containerd[1682]: time="2025-01-29T12:01:34.731541614Z" level=info msg="Forcibly stopping sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\"" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.768 [WARNING][5723] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0", GenerateName:"calico-kube-controllers-679c5f8dc4-", Namespace:"calico-system", SelfLink:"", UID:"5b402940-58cf-4f1a-a6f1-92b0dbe92e45", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"679c5f8dc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"367cf896c86f3148dbf30f2423ebb4ec613d01198a997c4daf2e8a996a644a9a", Pod:"calico-kube-controllers-679c5f8dc4-tpdkk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7e907e3ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.768 [INFO][5723] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.768 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" iface="eth0" netns="" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.768 [INFO][5723] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.768 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.789 [INFO][5729] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.790 [INFO][5729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.790 [INFO][5729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.798 [WARNING][5729] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.798 [INFO][5729] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" HandleID="k8s-pod-network.c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--kube--controllers--679c5f8dc4--tpdkk-eth0" Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.800 [INFO][5729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.803065 containerd[1682]: 2025-01-29 12:01:34.801 [INFO][5723] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92" Jan 29 12:01:34.803772 containerd[1682]: time="2025-01-29T12:01:34.803108838Z" level=info msg="TearDown network for sandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" successfully" Jan 29 12:01:34.816952 containerd[1682]: time="2025-01-29T12:01:34.816890171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:01:34.817716 containerd[1682]: time="2025-01-29T12:01:34.816982491Z" level=info msg="RemovePodSandbox \"c8df182f9e24644f5b51a3c2a260d714965493b399d58c6017a0e68c218aaf92\" returns successfully" Jan 29 12:01:34.817716 containerd[1682]: time="2025-01-29T12:01:34.817491531Z" level=info msg="StopPodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\"" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.867 [WARNING][5747] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4021a8b2-0983-46eb-bc49-b0d6f7f6429e", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069", Pod:"csi-node-driver-6b7ld", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali47c8c9fe333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.867 [INFO][5747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.867 [INFO][5747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" iface="eth0" netns="" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.867 [INFO][5747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.867 [INFO][5747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.889 [INFO][5753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.890 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.890 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.898 [WARNING][5753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.898 [INFO][5753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.900 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.903640 containerd[1682]: 2025-01-29 12:01:34.901 [INFO][5747] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.903640 containerd[1682]: time="2025-01-29T12:01:34.903356129Z" level=info msg="TearDown network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" successfully" Jan 29 12:01:34.903640 containerd[1682]: time="2025-01-29T12:01:34.903382769Z" level=info msg="StopPodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" returns successfully" Jan 29 12:01:34.905331 containerd[1682]: time="2025-01-29T12:01:34.904842570Z" level=info msg="RemovePodSandbox for \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\"" Jan 29 12:01:34.905331 containerd[1682]: time="2025-01-29T12:01:34.904890050Z" level=info msg="Forcibly stopping sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\"" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.945 [WARNING][5771] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4021a8b2-0983-46eb-bc49-b0d6f7f6429e", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"8c303bef30d21c2dd188f16bb69deffaa7db033809e396d3fbed0935fc441069", Pod:"csi-node-driver-6b7ld", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali47c8c9fe333", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.945 [INFO][5771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.945 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" iface="eth0" netns="" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.945 [INFO][5771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.945 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.967 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.967 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.967 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.976 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.976 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" HandleID="k8s-pod-network.a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-csi--node--driver--6b7ld-eth0" Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.977 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:34.981060 containerd[1682]: 2025-01-29 12:01:34.979 [INFO][5771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4" Jan 29 12:01:34.981555 containerd[1682]: time="2025-01-29T12:01:34.981066679Z" level=info msg="TearDown network for sandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" successfully" Jan 29 12:01:34.993794 containerd[1682]: time="2025-01-29T12:01:34.993730011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:01:34.994044 containerd[1682]: time="2025-01-29T12:01:34.993826211Z" level=info msg="RemovePodSandbox \"a9b69044a78f78502261d5d99c5f9405143a61a6d29a3a950bc9062d7229d0b4\" returns successfully" Jan 29 12:01:34.994664 containerd[1682]: time="2025-01-29T12:01:34.994463171Z" level=info msg="StopPodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\"" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.034 [WARNING][5795] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfd5f24b-a59e-4991-be63-4ecca56f7298", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e", Pod:"calico-apiserver-5b958f5bc6-wzcct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f995d54188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.034 [INFO][5795] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.034 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" iface="eth0" netns="" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.034 [INFO][5795] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.034 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.054 [INFO][5802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.054 [INFO][5802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.054 [INFO][5802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.062 [WARNING][5802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.062 [INFO][5802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.064 [INFO][5802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:35.067498 containerd[1682]: 2025-01-29 12:01:35.065 [INFO][5795] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.068379 containerd[1682]: time="2025-01-29T12:01:35.067512798Z" level=info msg="TearDown network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" successfully" Jan 29 12:01:35.068379 containerd[1682]: time="2025-01-29T12:01:35.067539478Z" level=info msg="StopPodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" returns successfully" Jan 29 12:01:35.068671 containerd[1682]: time="2025-01-29T12:01:35.068631039Z" level=info msg="RemovePodSandbox for \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\"" Jan 29 12:01:35.068725 containerd[1682]: time="2025-01-29T12:01:35.068677959Z" level=info msg="Forcibly stopping sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\"" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.104 [WARNING][5820] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0", GenerateName:"calico-apiserver-5b958f5bc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bfd5f24b-a59e-4991-be63-4ecca56f7298", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 0, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b958f5bc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-ecab7ceadc", ContainerID:"064c8a3f05b7bc08bc7ae9f43bfbe216877abb7aae6fdd1805013e2b1364064e", Pod:"calico-apiserver-5b958f5bc6-wzcct", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f995d54188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.104 [INFO][5820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.104 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" iface="eth0" netns="" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.104 [INFO][5820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.104 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.126 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.126 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.126 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.136 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.136 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" HandleID="k8s-pod-network.e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Workload="ci--4081.3.0--a--ecab7ceadc-k8s-calico--apiserver--5b958f5bc6--wzcct-eth0" Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.137 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:01:35.140656 containerd[1682]: 2025-01-29 12:01:35.139 [INFO][5820] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5" Jan 29 12:01:35.141293 containerd[1682]: time="2025-01-29T12:01:35.140589224Z" level=info msg="TearDown network for sandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" successfully" Jan 29 12:01:35.152059 containerd[1682]: time="2025-01-29T12:01:35.152000034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:01:35.152998 containerd[1682]: time="2025-01-29T12:01:35.152085154Z" level=info msg="RemovePodSandbox \"e267f134828b2bcb5023ffa655c4c93c169545758d178583e4ccce8e29abe7c5\" returns successfully" Jan 29 12:01:36.284070 kubelet[3290]: I0129 12:01:36.283853 3290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6b7ld" podStartSLOduration=31.271438657 podStartE2EDuration="41.283835978s" podCreationTimestamp="2025-01-29 12:00:55 +0000 UTC" firstStartedPulling="2025-01-29 12:01:21.186983618 +0000 UTC m=+47.192140945" lastFinishedPulling="2025-01-29 12:01:31.199380939 +0000 UTC m=+57.204538266" observedRunningTime="2025-01-29 12:01:31.394136534 +0000 UTC m=+57.399293861" watchObservedRunningTime="2025-01-29 12:01:36.283835978 +0000 UTC m=+62.288993305" Jan 29 12:01:41.368291 update_engine[1663]: I20250129 12:01:41.366672 1663 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:01:41.368291 update_engine[1663]: I20250129 12:01:41.366904 1663 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:01:41.368291 update_engine[1663]: I20250129 12:01:41.367123 1663 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:01:41.447113 update_engine[1663]: E20250129 12:01:41.447034 1663 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447135 1663 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447144 1663 omaha_request_action.cc:617] Omaha request response: Jan 29 12:01:41.447333 update_engine[1663]: E20250129 12:01:41.447242 1663 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447260 1663 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. 
Aborting processing. Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447266 1663 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447273 1663 update_attempter.cc:306] Processing Done. Jan 29 12:01:41.447333 update_engine[1663]: E20250129 12:01:41.447288 1663 update_attempter.cc:619] Update failed. Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447292 1663 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447297 1663 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 12:01:41.447333 update_engine[1663]: I20250129 12:01:41.447302 1663 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 29 12:01:41.447859 update_engine[1663]: I20250129 12:01:41.447376 1663 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 12:01:41.447859 update_engine[1663]: I20250129 12:01:41.447397 1663 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 12:01:41.447859 update_engine[1663]: I20250129 12:01:41.447402 1663 omaha_request_action.cc:272] Request: Jan 29 12:01:41.447859 update_engine[1663]: Jan 29 12:01:41.447859 update_engine[1663]: Jan 29 12:01:41.447859 update_engine[1663]: Jan 29 12:01:41.447859 update_engine[1663]: Jan 29 12:01:41.447859 update_engine[1663]: Jan 29 12:01:41.447859 update_engine[1663]: Jan 29 12:01:41.447859 update_engine[1663]: I20250129 12:01:41.447408 1663 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 12:01:41.447859 update_engine[1663]: I20250129 12:01:41.447573 1663 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 12:01:41.447859 update_engine[1663]: I20250129 12:01:41.447845 1663 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 12:01:41.448572 locksmithd[1712]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 12:01:41.495697 update_engine[1663]: E20250129 12:01:41.495592 1663 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495744 1663 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495755 1663 omaha_request_action.cc:617] Omaha request response: Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495764 1663 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495770 1663 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495775 1663 update_attempter.cc:306] Processing Done. Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495781 1663 update_attempter.cc:310] Error event sent. 
Jan 29 12:01:41.495851 update_engine[1663]: I20250129 12:01:41.495792 1663 update_check_scheduler.cc:74] Next update check in 45m32s
Jan 29 12:01:41.496693 locksmithd[1712]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 29 12:01:52.275734 kubelet[3290]: I0129 12:01:52.274950 3290 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:03:31.675315 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:56608.service - OpenSSH per-connection server daemon (10.200.16.10:56608).
Jan 29 12:03:32.112759 sshd[6099]: Accepted publickey for core from 10.200.16.10 port 56608 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:32.115218 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:32.121377 systemd-logind[1660]: New session 10 of user core.
Jan 29 12:03:32.125981 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:03:32.498078 sshd[6099]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:32.502864 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:56608.service: Deactivated successfully.
Jan 29 12:03:32.504459 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:03:32.506130 systemd-logind[1660]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:03:32.506956 systemd-logind[1660]: Removed session 10.
Jan 29 12:03:36.225864 systemd[1]: run-containerd-runc-k8s.io-976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73-runc.Ce8Pgu.mount: Deactivated successfully.
Jan 29 12:03:37.576484 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:59634.service - OpenSSH per-connection server daemon (10.200.16.10:59634).
Jan 29 12:03:38.001435 sshd[6159]: Accepted publickey for core from 10.200.16.10 port 59634 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:38.002816 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:38.007291 systemd-logind[1660]: New session 11 of user core.
Jan 29 12:03:38.013179 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:03:38.384594 sshd[6159]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:38.387460 systemd-logind[1660]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:03:38.387697 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:59634.service: Deactivated successfully.
Jan 29 12:03:38.389430 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:03:38.391649 systemd-logind[1660]: Removed session 11.
Jan 29 12:03:43.466333 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:59650.service - OpenSSH per-connection server daemon (10.200.16.10:59650).
Jan 29 12:03:43.895511 sshd[6176]: Accepted publickey for core from 10.200.16.10 port 59650 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:43.896916 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:43.901398 systemd-logind[1660]: New session 12 of user core.
Jan 29 12:03:43.906760 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 12:03:44.274837 sshd[6176]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:44.278748 systemd-logind[1660]: Session 12 logged out. Waiting for processes to exit.
Jan 29 12:03:44.279373 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:59650.service: Deactivated successfully.
Jan 29 12:03:44.282603 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 12:03:44.283670 systemd-logind[1660]: Removed session 12.
Jan 29 12:03:44.356679 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:59666.service - OpenSSH per-connection server daemon (10.200.16.10:59666).
Jan 29 12:03:44.801431 sshd[6189]: Accepted publickey for core from 10.200.16.10 port 59666 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:44.802951 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:44.807683 systemd-logind[1660]: New session 13 of user core.
Jan 29 12:03:44.813928 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 12:03:45.227915 sshd[6189]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:45.231903 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:59666.service: Deactivated successfully.
Jan 29 12:03:45.234180 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 12:03:45.235201 systemd-logind[1660]: Session 13 logged out. Waiting for processes to exit.
Jan 29 12:03:45.236174 systemd-logind[1660]: Removed session 13.
Jan 29 12:03:45.312396 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:59668.service - OpenSSH per-connection server daemon (10.200.16.10:59668).
Jan 29 12:03:45.746664 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 59668 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:45.748007 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:45.751864 systemd-logind[1660]: New session 14 of user core.
Jan 29 12:03:45.757750 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 12:03:46.127469 sshd[6200]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:46.131842 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:59668.service: Deactivated successfully.
Jan 29 12:03:46.135711 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 12:03:46.137281 systemd-logind[1660]: Session 14 logged out. Waiting for processes to exit.
Jan 29 12:03:46.138263 systemd-logind[1660]: Removed session 14.
Jan 29 12:03:51.211922 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:36238.service - OpenSSH per-connection server daemon (10.200.16.10:36238).
Jan 29 12:03:51.633034 sshd[6219]: Accepted publickey for core from 10.200.16.10 port 36238 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:51.634445 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:51.639148 systemd-logind[1660]: New session 15 of user core.
Jan 29 12:03:51.642878 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 12:03:52.019878 sshd[6219]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:52.023482 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:36238.service: Deactivated successfully.
Jan 29 12:03:52.025968 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 12:03:52.026764 systemd-logind[1660]: Session 15 logged out. Waiting for processes to exit.
Jan 29 12:03:52.027678 systemd-logind[1660]: Removed session 15.
Jan 29 12:03:57.105867 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:54752.service - OpenSSH per-connection server daemon (10.200.16.10:54752).
Jan 29 12:03:57.546707 sshd[6232]: Accepted publickey for core from 10.200.16.10 port 54752 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:03:57.548039 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:03:57.552289 systemd-logind[1660]: New session 16 of user core.
Jan 29 12:03:57.556780 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:03:57.946680 sshd[6232]: pam_unix(sshd:session): session closed for user core
Jan 29 12:03:57.949550 systemd-logind[1660]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:03:57.950126 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:54752.service: Deactivated successfully.
Jan 29 12:03:57.952140 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:03:57.954404 systemd-logind[1660]: Removed session 16.
Jan 29 12:04:03.033290 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:54762.service - OpenSSH per-connection server daemon (10.200.16.10:54762).
Jan 29 12:04:03.477096 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 54762 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:03.478407 sshd[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:03.483052 systemd-logind[1660]: New session 17 of user core.
Jan 29 12:04:03.489757 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:04:03.869661 sshd[6253]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:03.873095 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:54762.service: Deactivated successfully.
Jan 29 12:04:03.875436 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:04:03.876186 systemd-logind[1660]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:04:03.877037 systemd-logind[1660]: Removed session 17.
Jan 29 12:04:03.947271 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:54772.service - OpenSSH per-connection server daemon (10.200.16.10:54772).
Jan 29 12:04:04.376174 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 54772 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:04.378025 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:04.384267 systemd-logind[1660]: New session 18 of user core.
Jan 29 12:04:04.388851 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:04:04.842877 sshd[6265]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:04.846643 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:54772.service: Deactivated successfully.
Jan 29 12:04:04.848706 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:04:04.850000 systemd-logind[1660]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:04:04.851114 systemd-logind[1660]: Removed session 18.
Jan 29 12:04:04.930509 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:54774.service - OpenSSH per-connection server daemon (10.200.16.10:54774).
Jan 29 12:04:05.364358 sshd[6277]: Accepted publickey for core from 10.200.16.10 port 54774 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:05.365785 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:05.369717 systemd-logind[1660]: New session 19 of user core.
Jan 29 12:04:05.386790 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:04:06.238200 systemd[1]: run-containerd-runc-k8s.io-976d93c1f8364a5e241dd1f9195d1e22eea09e3ae54572f803791a0b1414ba73-runc.8DcvRl.mount: Deactivated successfully.
Jan 29 12:04:07.289087 sshd[6277]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:07.291849 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:54774.service: Deactivated successfully.
Jan 29 12:04:07.294014 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:04:07.295864 systemd-logind[1660]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:04:07.297141 systemd-logind[1660]: Removed session 19.
Jan 29 12:04:07.365656 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:38550.service - OpenSSH per-connection server daemon (10.200.16.10:38550).
Jan 29 12:04:07.803303 sshd[6318]: Accepted publickey for core from 10.200.16.10 port 38550 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:07.804589 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:07.808945 systemd-logind[1660]: New session 20 of user core.
Jan 29 12:04:07.810779 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:04:08.292601 sshd[6318]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:08.296010 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:38550.service: Deactivated successfully.
Jan 29 12:04:08.299511 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 12:04:08.301372 systemd-logind[1660]: Session 20 logged out. Waiting for processes to exit.
Jan 29 12:04:08.302513 systemd-logind[1660]: Removed session 20.
Jan 29 12:04:08.373872 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:38554.service - OpenSSH per-connection server daemon (10.200.16.10:38554).
Jan 29 12:04:08.798871 sshd[6348]: Accepted publickey for core from 10.200.16.10 port 38554 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:08.800091 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:08.804374 systemd-logind[1660]: New session 21 of user core.
Jan 29 12:04:08.811797 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 12:04:09.208047 sshd[6348]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:09.214196 systemd-logind[1660]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:04:09.214378 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:38554.service: Deactivated successfully.
Jan 29 12:04:09.218598 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:04:09.220220 systemd-logind[1660]: Removed session 21.
Jan 29 12:04:14.300915 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:38560.service - OpenSSH per-connection server daemon (10.200.16.10:38560).
Jan 29 12:04:14.735772 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 38560 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:14.737163 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:14.741697 systemd-logind[1660]: New session 22 of user core.
Jan 29 12:04:14.746822 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:04:15.108173 sshd[6364]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:15.111790 systemd-logind[1660]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:04:15.111937 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:38560.service: Deactivated successfully.
Jan 29 12:04:15.113573 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:04:15.114527 systemd-logind[1660]: Removed session 22.
Jan 29 12:04:20.201084 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:44728.service - OpenSSH per-connection server daemon (10.200.16.10:44728).
Jan 29 12:04:20.644295 sshd[6398]: Accepted publickey for core from 10.200.16.10 port 44728 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:20.645693 sshd[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:20.650210 systemd-logind[1660]: New session 23 of user core.
Jan 29 12:04:20.656793 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:04:21.040414 sshd[6398]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:21.044721 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:44728.service: Deactivated successfully.
Jan 29 12:04:21.047731 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:04:21.048766 systemd-logind[1660]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:04:21.049879 systemd-logind[1660]: Removed session 23.
Jan 29 12:04:26.125959 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:36116.service - OpenSSH per-connection server daemon (10.200.16.10:36116).
Jan 29 12:04:26.553069 sshd[6423]: Accepted publickey for core from 10.200.16.10 port 36116 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:26.554428 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:26.559310 systemd-logind[1660]: New session 24 of user core.
Jan 29 12:04:26.562816 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:04:26.933207 sshd[6423]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:26.937131 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:36116.service: Deactivated successfully.
Jan 29 12:04:26.939812 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:04:26.941466 systemd-logind[1660]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:04:26.943404 systemd-logind[1660]: Removed session 24.
Jan 29 12:04:32.019364 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:36122.service - OpenSSH per-connection server daemon (10.200.16.10:36122).
Jan 29 12:04:32.441329 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 36122 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:32.442811 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:32.447286 systemd-logind[1660]: New session 25 of user core.
Jan 29 12:04:32.451800 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 12:04:32.826041 sshd[6441]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:32.829302 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:36122.service: Deactivated successfully.
Jan 29 12:04:32.831145 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:04:32.831993 systemd-logind[1660]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:04:32.833181 systemd-logind[1660]: Removed session 25.
Jan 29 12:04:37.909890 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:48346.service - OpenSSH per-connection server daemon (10.200.16.10:48346).
Jan 29 12:04:38.333191 sshd[6499]: Accepted publickey for core from 10.200.16.10 port 48346 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:38.334865 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:38.339219 systemd-logind[1660]: New session 26 of user core.
Jan 29 12:04:38.342883 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 12:04:38.714824 sshd[6499]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:38.718885 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:48346.service: Deactivated successfully.
Jan 29 12:04:38.721767 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:04:38.723127 systemd-logind[1660]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:04:38.724038 systemd-logind[1660]: Removed session 26.
Jan 29 12:04:43.793727 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:48358.service - OpenSSH per-connection server daemon (10.200.16.10:48358).
Jan 29 12:04:44.223061 sshd[6511]: Accepted publickey for core from 10.200.16.10 port 48358 ssh2: RSA SHA256:/rPiJgtjhsHQMpnhmsQVIHUYsykeZrTiixDf6Vkinow
Jan 29 12:04:44.224960 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:04:44.228868 systemd-logind[1660]: New session 27 of user core.
Jan 29 12:04:44.236809 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 12:04:44.605338 sshd[6511]: pam_unix(sshd:session): session closed for user core
Jan 29 12:04:44.608424 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:48358.service: Deactivated successfully.
Jan 29 12:04:44.611180 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 12:04:44.613236 systemd-logind[1660]: Session 27 logged out. Waiting for processes to exit.
Jan 29 12:04:44.614193 systemd-logind[1660]: Removed session 27.