Jan 15 12:49:14.404707 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 15 12:49:14.404731 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 15 12:49:14.404740 kernel: KASLR enabled
Jan 15 12:49:14.404746 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 15 12:49:14.404753 kernel: printk: bootconsole [pl11] enabled
Jan 15 12:49:14.404759 kernel: efi: EFI v2.7 by EDK II
Jan 15 12:49:14.404766 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 15 12:49:14.404773 kernel: random: crng init done
Jan 15 12:49:14.404779 kernel: ACPI: Early table checksum verification disabled
Jan 15 12:49:14.404785 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 15 12:49:14.404792 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404798 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404806 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 15 12:49:14.404812 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404820 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404826 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404833 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404841 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404848 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404854 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 15 12:49:14.404861 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 15 12:49:14.404867 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 15 12:49:14.404874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 15 12:49:14.404881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 15 12:49:14.404887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 15 12:49:14.404894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 15 12:49:14.404900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 15 12:49:14.404907 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 15 12:49:14.404915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 15 12:49:14.404922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 15 12:49:14.404928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 15 12:49:14.404935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 15 12:49:14.404942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 15 12:49:14.404948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 15 12:49:14.404955 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 15 12:49:14.404961 kernel: Zone ranges:
Jan 15 12:49:14.405098 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 15 12:49:14.405107 kernel: DMA32 empty
Jan 15 12:49:14.405114 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:49:14.405120 kernel: Movable zone start for each node
Jan 15 12:49:14.405132 kernel: Early memory node ranges
Jan 15 12:49:14.405139 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 15 12:49:14.405146 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 15 12:49:14.405153 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 15 12:49:14.405160 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 15 12:49:14.405168 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 15 12:49:14.405175 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 15 12:49:14.405182 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 15 12:49:14.405189 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 15 12:49:14.405196 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 15 12:49:14.405203 kernel: psci: probing for conduit method from ACPI.
Jan 15 12:49:14.405210 kernel: psci: PSCIv1.1 detected in firmware.
Jan 15 12:49:14.405217 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 12:49:14.405224 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 15 12:49:14.405230 kernel: psci: SMC Calling Convention v1.4
Jan 15 12:49:14.405237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 15 12:49:14.405244 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 15 12:49:14.405252 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 15 12:49:14.405259 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 15 12:49:14.405266 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 12:49:14.405273 kernel: Detected PIPT I-cache on CPU0
Jan 15 12:49:14.405280 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 12:49:14.405287 kernel: CPU features: detected: Hardware dirty bit management
Jan 15 12:49:14.405294 kernel: CPU features: detected: Spectre-BHB
Jan 15 12:49:14.405301 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 15 12:49:14.405308 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 15 12:49:14.405315 kernel: CPU features: detected: ARM erratum 1418040
Jan 15 12:49:14.405322 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 15 12:49:14.405330 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 15 12:49:14.405338 kernel: alternatives: applying boot alternatives
Jan 15 12:49:14.405346 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:49:14.405354 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 15 12:49:14.405361 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 12:49:14.405368 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 12:49:14.405374 kernel: Fallback order for Node 0: 0
Jan 15 12:49:14.405381 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 15 12:49:14.405388 kernel: Policy zone: Normal
Jan 15 12:49:14.405395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 12:49:14.405402 kernel: software IO TLB: area num 2.
Jan 15 12:49:14.405410 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 15 12:49:14.405418 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Jan 15 12:49:14.405425 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 12:49:14.405431 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 12:49:14.405439 kernel: rcu: RCU event tracing is enabled.
Jan 15 12:49:14.405446 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 12:49:14.405453 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 12:49:14.405460 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 12:49:14.405467 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 12:49:14.405474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 12:49:14.405481 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 12:49:14.405489 kernel: GICv3: 960 SPIs implemented
Jan 15 12:49:14.405496 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 12:49:14.405503 kernel: Root IRQ handler: gic_handle_irq
Jan 15 12:49:14.405510 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 15 12:49:14.405517 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 15 12:49:14.405524 kernel: ITS: No ITS available, not enabling LPIs
Jan 15 12:49:14.405531 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 12:49:14.405538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:49:14.405545 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 15 12:49:14.405552 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 15 12:49:14.405559 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 15 12:49:14.405567 kernel: Console: colour dummy device 80x25
Jan 15 12:49:14.405575 kernel: printk: console [tty1] enabled
Jan 15 12:49:14.405582 kernel: ACPI: Core revision 20230628
Jan 15 12:49:14.405589 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 15 12:49:14.405596 kernel: pid_max: default: 32768 minimum: 301
Jan 15 12:49:14.405603 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 15 12:49:14.405610 kernel: landlock: Up and running.
Jan 15 12:49:14.405618 kernel: SELinux: Initializing.
Jan 15 12:49:14.405625 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:49:14.405633 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 12:49:14.405642 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:49:14.405649 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 12:49:14.405656 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 15 12:49:14.405664 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 15 12:49:14.405671 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 15 12:49:14.405678 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 12:49:14.405685 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 12:49:14.405699 kernel: Remapping and enabling EFI services.
Jan 15 12:49:14.405706 kernel: smp: Bringing up secondary CPUs ...
Jan 15 12:49:14.405714 kernel: Detected PIPT I-cache on CPU1
Jan 15 12:49:14.405721 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 15 12:49:14.405730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 15 12:49:14.405737 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 15 12:49:14.405745 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 12:49:14.405752 kernel: SMP: Total of 2 processors activated.
Jan 15 12:49:14.405759 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 12:49:14.405769 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 15 12:49:14.405776 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 15 12:49:14.405784 kernel: CPU features: detected: CRC32 instructions
Jan 15 12:49:14.405791 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 15 12:49:14.405798 kernel: CPU features: detected: LSE atomic instructions
Jan 15 12:49:14.405806 kernel: CPU features: detected: Privileged Access Never
Jan 15 12:49:14.405813 kernel: CPU: All CPU(s) started at EL1
Jan 15 12:49:14.405820 kernel: alternatives: applying system-wide alternatives
Jan 15 12:49:14.405828 kernel: devtmpfs: initialized
Jan 15 12:49:14.405837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 12:49:14.405844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 12:49:14.405852 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 12:49:14.405859 kernel: SMBIOS 3.1.0 present.
Jan 15 12:49:14.405867 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 15 12:49:14.405874 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 12:49:14.405882 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 12:49:14.405889 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 12:49:14.405897 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 12:49:14.405906 kernel: audit: initializing netlink subsys (disabled)
Jan 15 12:49:14.405914 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 15 12:49:14.405921 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 12:49:14.405928 kernel: cpuidle: using governor menu
Jan 15 12:49:14.405936 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 12:49:14.405943 kernel: ASID allocator initialised with 32768 entries
Jan 15 12:49:14.405951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 12:49:14.405958 kernel: Serial: AMBA PL011 UART driver
Jan 15 12:49:14.405965 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 15 12:49:14.405984 kernel: Modules: 0 pages in range for non-PLT usage
Jan 15 12:49:14.405991 kernel: Modules: 509040 pages in range for PLT usage
Jan 15 12:49:14.405999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 12:49:14.406007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 12:49:14.406014 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 12:49:14.406022 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 12:49:14.406029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 12:49:14.406036 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 12:49:14.406044 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 12:49:14.406053 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 12:49:14.406060 kernel: ACPI: Added _OSI(Module Device)
Jan 15 12:49:14.406068 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 12:49:14.406075 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 15 12:49:14.406082 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 12:49:14.406090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 12:49:14.406097 kernel: ACPI: Interpreter enabled
Jan 15 12:49:14.406104 kernel: ACPI: Using GIC for interrupt routing
Jan 15 12:49:14.406112 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 15 12:49:14.406120 kernel: printk: console [ttyAMA0] enabled
Jan 15 12:49:14.406128 kernel: printk: bootconsole [pl11] disabled
Jan 15 12:49:14.406136 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 15 12:49:14.406143 kernel: iommu: Default domain type: Translated
Jan 15 12:49:14.406150 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 12:49:14.406158 kernel: efivars: Registered efivars operations
Jan 15 12:49:14.406165 kernel: vgaarb: loaded
Jan 15 12:49:14.406172 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 12:49:14.406180 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 12:49:14.406189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 12:49:14.406196 kernel: pnp: PnP ACPI init
Jan 15 12:49:14.406204 kernel: pnp: PnP ACPI: found 0 devices
Jan 15 12:49:14.406211 kernel: NET: Registered PF_INET protocol family
Jan 15 12:49:14.406219 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 12:49:14.406227 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 12:49:14.406234 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 12:49:14.406242 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 12:49:14.406249 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 12:49:14.406258 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 12:49:14.406266 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:49:14.406273 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 12:49:14.406281 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 12:49:14.406288 kernel: PCI: CLS 0 bytes, default 64
Jan 15 12:49:14.406295 kernel: kvm [1]: HYP mode not available
Jan 15 12:49:14.406303 kernel: Initialise system trusted keyrings
Jan 15 12:49:14.406310 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 12:49:14.406318 kernel: Key type asymmetric registered
Jan 15 12:49:14.406327 kernel: Asymmetric key parser 'x509' registered
Jan 15 12:49:14.406334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 15 12:49:14.406342 kernel: io scheduler mq-deadline registered
Jan 15 12:49:14.406349 kernel: io scheduler kyber registered
Jan 15 12:49:14.406357 kernel: io scheduler bfq registered
Jan 15 12:49:14.406364 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 12:49:14.406372 kernel: thunder_xcv, ver 1.0
Jan 15 12:49:14.406379 kernel: thunder_bgx, ver 1.0
Jan 15 12:49:14.406386 kernel: nicpf, ver 1.0
Jan 15 12:49:14.406394 kernel: nicvf, ver 1.0
Jan 15 12:49:14.406534 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 12:49:14.406615 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:49:13 UTC (1736945353)
Jan 15 12:49:14.406625 kernel: efifb: probing for efifb
Jan 15 12:49:14.406634 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 15 12:49:14.406641 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 15 12:49:14.406649 kernel: efifb: scrolling: redraw
Jan 15 12:49:14.406656 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 15 12:49:14.406666 kernel: Console: switching to colour frame buffer device 128x48
Jan 15 12:49:14.406674 kernel: fb0: EFI VGA frame buffer device
Jan 15 12:49:14.406681 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 15 12:49:14.406689 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 12:49:14.406696 kernel: No ACPI PMU IRQ for CPU0
Jan 15 12:49:14.406704 kernel: No ACPI PMU IRQ for CPU1
Jan 15 12:49:14.406711 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 15 12:49:14.406719 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 15 12:49:14.406726 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 12:49:14.406735 kernel: NET: Registered PF_INET6 protocol family
Jan 15 12:49:14.406742 kernel: Segment Routing with IPv6
Jan 15 12:49:14.406750 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 12:49:14.406757 kernel: NET: Registered PF_PACKET protocol family
Jan 15 12:49:14.406765 kernel: Key type dns_resolver registered
Jan 15 12:49:14.406772 kernel: registered taskstats version 1
Jan 15 12:49:14.406779 kernel: Loading compiled-in X.509 certificates
Jan 15 12:49:14.406787 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 15 12:49:14.406795 kernel: Key type .fscrypt registered
Jan 15 12:49:14.406804 kernel: Key type fscrypt-provisioning registered
Jan 15 12:49:14.406811 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 12:49:14.406819 kernel: ima: Allocated hash algorithm: sha1
Jan 15 12:49:14.406827 kernel: ima: No architecture policies found
Jan 15 12:49:14.406834 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 12:49:14.406842 kernel: clk: Disabling unused clocks
Jan 15 12:49:14.406849 kernel: Freeing unused kernel memory: 39360K
Jan 15 12:49:14.406857 kernel: Run /init as init process
Jan 15 12:49:14.406864 kernel: with arguments:
Jan 15 12:49:14.406873 kernel: /init
Jan 15 12:49:14.406880 kernel: with environment:
Jan 15 12:49:14.406887 kernel: HOME=/
Jan 15 12:49:14.406895 kernel: TERM=linux
Jan 15 12:49:14.406902 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 15 12:49:14.406912 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 15 12:49:14.406921 systemd[1]: Detected virtualization microsoft.
Jan 15 12:49:14.406929 systemd[1]: Detected architecture arm64.
Jan 15 12:49:14.406938 systemd[1]: Running in initrd.
Jan 15 12:49:14.406946 systemd[1]: No hostname configured, using default hostname.
Jan 15 12:49:14.406954 systemd[1]: Hostname set to <localhost>.
Jan 15 12:49:14.406962 systemd[1]: Initializing machine ID from random generator.
Jan 15 12:49:14.411347 systemd[1]: Queued start job for default target initrd.target.
Jan 15 12:49:14.411368 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 12:49:14.411377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 12:49:14.411387 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 12:49:14.411403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 12:49:14.411412 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 12:49:14.411420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 12:49:14.411430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 12:49:14.411439 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 12:49:14.411447 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 12:49:14.411457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 12:49:14.411465 systemd[1]: Reached target paths.target - Path Units.
Jan 15 12:49:14.411474 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 12:49:14.411482 systemd[1]: Reached target swap.target - Swaps.
Jan 15 12:49:14.411490 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 12:49:14.411498 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 12:49:14.411507 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 12:49:14.411515 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 12:49:14.411524 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 15 12:49:14.411534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 12:49:14.411543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 12:49:14.411551 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 12:49:14.411560 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 12:49:14.411568 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 12:49:14.411576 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 12:49:14.411585 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 12:49:14.411593 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 12:49:14.411601 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 12:49:14.411612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 12:49:14.411655 systemd-journald[217]: Collecting audit messages is disabled.
Jan 15 12:49:14.411677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:49:14.411686 systemd-journald[217]: Journal started
Jan 15 12:49:14.411708 systemd-journald[217]: Runtime Journal (/run/log/journal/a983d112e103439fb604e893777d34c0) is 8.0M, max 78.5M, 70.5M free.
Jan 15 12:49:14.412378 systemd-modules-load[218]: Inserted module 'overlay'
Jan 15 12:49:14.439596 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 12:49:14.440282 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 12:49:14.466270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 12:49:14.466317 kernel: Bridge firewalling registered
Jan 15 12:49:14.469887 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 12:49:14.470577 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 15 12:49:14.485185 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 12:49:14.496583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 12:49:14.508142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:49:14.533275 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:49:14.549160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 12:49:14.561684 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 12:49:14.586153 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 12:49:14.602222 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:49:14.611958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 12:49:14.626438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 12:49:14.639483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 12:49:14.665293 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 12:49:14.673185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 12:49:14.689164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 12:49:14.717832 dracut-cmdline[250]: dracut-dracut-053
Jan 15 12:49:14.731255 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 15 12:49:14.723326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 12:49:14.736714 systemd-resolved[251]: Positive Trust Anchors:
Jan 15 12:49:14.736723 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 12:49:14.736755 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 12:49:14.740123 systemd-resolved[251]: Defaulting to hostname 'linux'.
Jan 15 12:49:14.743232 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 12:49:14.772750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 12:49:14.910006 kernel: SCSI subsystem initialized
Jan 15 12:49:14.919013 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 12:49:14.930003 kernel: iscsi: registered transport (tcp)
Jan 15 12:49:14.948315 kernel: iscsi: registered transport (qla4xxx)
Jan 15 12:49:14.948338 kernel: QLogic iSCSI HBA Driver
Jan 15 12:49:14.983568 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 12:49:14.999252 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 12:49:15.028790 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 12:49:15.028838 kernel: device-mapper: uevent: version 1.0.3
Jan 15 12:49:15.035985 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 15 12:49:15.084997 kernel: raid6: neonx8 gen() 15758 MB/s
Jan 15 12:49:15.107987 kernel: raid6: neonx4 gen() 15651 MB/s
Jan 15 12:49:15.127979 kernel: raid6: neonx2 gen() 13233 MB/s
Jan 15 12:49:15.147979 kernel: raid6: neonx1 gen() 10489 MB/s
Jan 15 12:49:15.168979 kernel: raid6: int64x8 gen() 6962 MB/s
Jan 15 12:49:15.188978 kernel: raid6: int64x4 gen() 7352 MB/s
Jan 15 12:49:15.208978 kernel: raid6: int64x2 gen() 6134 MB/s
Jan 15 12:49:15.233401 kernel: raid6: int64x1 gen() 5061 MB/s
Jan 15 12:49:15.233414 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s
Jan 15 12:49:15.258505 kernel: raid6: .... xor() 11933 MB/s, rmw enabled
Jan 15 12:49:15.258528 kernel: raid6: using neon recovery algorithm
Jan 15 12:49:15.270724 kernel: xor: measuring software checksum speed
Jan 15 12:49:15.270739 kernel: 8regs : 19769 MB/sec
Jan 15 12:49:15.278416 kernel: 32regs : 18705 MB/sec
Jan 15 12:49:15.278429 kernel: arm64_neon : 26972 MB/sec
Jan 15 12:49:15.283195 kernel: xor: using function: arm64_neon (26972 MB/sec)
Jan 15 12:49:15.334997 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 12:49:15.345062 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 12:49:15.361178 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 12:49:15.383588 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 15 12:49:15.390394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 12:49:15.410214 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 12:49:15.428072 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Jan 15 12:49:15.455262 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 12:49:15.474305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 12:49:15.513261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 12:49:15.532476 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 12:49:15.557146 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 12:49:15.572601 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 12:49:15.587814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 12:49:15.602035 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 12:49:15.623265 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 12:49:15.637888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 15 12:49:15.638069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:49:15.682781 kernel: hv_vmbus: Vmbus version:5.3
Jan 15 12:49:15.682816 kernel: hv_vmbus: registering driver hid_hyperv
Jan 15 12:49:15.654982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:49:15.713277 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 15 12:49:15.713301 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 15 12:49:15.728424 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 15 12:49:15.676864 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:49:15.749517 kernel: hv_vmbus: registering driver hv_netvsc
Jan 15 12:49:15.749540 kernel: hv_vmbus: registering driver hv_storvsc
Jan 15 12:49:15.677107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:49:15.792100 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 15 12:49:15.792127 kernel: scsi host0: storvsc_host_t
Jan 15 12:49:15.792308 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 15 12:49:15.792320 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 15 12:49:15.792343 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 15 12:49:15.699079 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:49:15.809092 kernel: scsi host1: storvsc_host_t
Jan 15 12:49:15.734789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:49:15.777414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 12:49:15.810781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 12:49:15.810879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:49:15.858251 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 15 12:49:15.858409 kernel: PTP clock support registered
Jan 15 12:49:15.857706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 12:49:15.893270 kernel: hv_utils: Registering HyperV Utility Driver
Jan 15 12:49:15.893298 kernel: hv_vmbus: registering driver hv_utils
Jan 15 12:49:15.883795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 12:49:16.270013 kernel: hv_utils: Shutdown IC version 3.2
Jan 15 12:49:16.270046 kernel: hv_utils: TimeSync IC version 4.0
Jan 15 12:49:16.270059 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: VF slot 1 added
Jan 15 12:49:16.271047 kernel: hv_utils: Heartbeat IC version 3.0
Jan 15 12:49:16.269834 systemd-resolved[251]: Clock change detected. Flushing caches.
Jan 15 12:49:16.273266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 12:49:16.312170 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 15 12:49:16.385723 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 15 12:49:16.385746 kernel: hv_vmbus: registering driver hv_pci
Jan 15 12:49:16.385757 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 15 12:49:16.385893 kernel: hv_pci 295fc9d2-6fa3-4bdb-8bb4-bb1d61c2348b: PCI VMBus probing: Using version 0x10004
Jan 15 12:49:16.442007 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 15 12:49:16.442154 kernel: hv_pci 295fc9d2-6fa3-4bdb-8bb4-bb1d61c2348b: PCI host bridge to bus 6fa3:00
Jan 15 12:49:16.442250 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 15 12:49:16.442339 kernel: pci_bus 6fa3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 15 12:49:16.442445 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 15 12:49:16.442534 kernel: pci_bus 6fa3:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 15 12:49:16.442623 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 15 12:49:16.442784 kernel: pci 6fa3:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 15 12:49:16.442902 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:49:16.442914 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 15 12:49:16.443038 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 15 12:49:16.443128 kernel: pci 6fa3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:49:16.443219 kernel: pci 6fa3:00:02.0: enabling Extended Tags
Jan 15 12:49:16.443306 kernel: pci 6fa3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6fa3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 15 12:49:16.443388 kernel: pci_bus 6fa3:00: busn_res: [bus 00-ff] end is updated to 00
Jan 15 12:49:16.443467 kernel: pci 6fa3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 15 12:49:16.350623 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 12:49:16.492723 kernel: mlx5_core 6fa3:00:02.0: enabling device (0000 -> 0002)
Jan 15 12:49:16.713340 kernel: mlx5_core 6fa3:00:02.0: firmware version: 16.30.1284
Jan 15 12:49:16.713478 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: VF registering: eth1
Jan 15 12:49:16.713578 kernel: mlx5_core 6fa3:00:02.0 eth1: joined to eth0
Jan 15 12:49:16.713673 kernel: mlx5_core 6fa3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 15 12:49:16.723981 kernel: mlx5_core 6fa3:00:02.0 enP28579s1: renamed from eth1
Jan 15 12:49:16.871100 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 15 12:49:16.990277 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (495)
Jan 15 12:49:16.998966 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (506)
Jan 15 12:49:17.005447 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 15 12:49:17.014251 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 15 12:49:17.027246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 15 12:49:17.063286 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 12:49:17.076734 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 15 12:49:17.105967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:49:17.115957 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:49:18.126924 disk-uuid[608]: The operation has completed successfully.
Jan 15 12:49:18.134600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 15 12:49:18.196384 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 12:49:18.196492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 12:49:18.222167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 12:49:18.238142 sh[694]: Success
Jan 15 12:49:18.269985 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 15 12:49:18.470063 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 12:49:18.494058 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 12:49:18.504862 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 12:49:18.543950 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 15 12:49:18.544012 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:49:18.556952 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 15 12:49:18.556993 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 12:49:18.561913 kernel: BTRFS info (device dm-0): using free space tree
Jan 15 12:49:18.892433 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 15 12:49:18.898546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 12:49:18.921235 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 12:49:18.930155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 12:49:18.971780 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:49:18.971837 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:49:18.977168 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:49:19.001229 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:49:19.008826 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 15 12:49:19.023204 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:49:19.032358 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 12:49:19.060556 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 12:49:19.068453 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 12:49:19.085236 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 12:49:19.097257 systemd-networkd[877]: lo: Link UP
Jan 15 12:49:19.097261 systemd-networkd[877]: lo: Gained carrier
Jan 15 12:49:19.098900 systemd-networkd[877]: Enumeration completed
Jan 15 12:49:19.099292 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 12:49:19.111617 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:49:19.111621 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 12:49:19.112094 systemd[1]: Reached target network.target - Network.
Jan 15 12:49:19.211953 kernel: mlx5_core 6fa3:00:02.0 enP28579s1: Link up
Jan 15 12:49:19.252093 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: Data path switched to VF: enP28579s1
Jan 15 12:49:19.252454 systemd-networkd[877]: enP28579s1: Link UP
Jan 15 12:49:19.252538 systemd-networkd[877]: eth0: Link UP
Jan 15 12:49:19.252658 systemd-networkd[877]: eth0: Gained carrier
Jan 15 12:49:19.252667 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 12:49:19.276679 systemd-networkd[877]: enP28579s1: Gained carrier
Jan 15 12:49:19.290981 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 15 12:49:21.112197 systemd-networkd[877]: enP28579s1: Gained IPv6LL
Jan 15 12:49:21.240038 systemd-networkd[877]: eth0: Gained IPv6LL
Jan 15 12:49:22.249499 ignition[878]: Ignition 2.19.0
Jan 15 12:49:22.249513 ignition[878]: Stage: fetch-offline
Jan 15 12:49:22.249550 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:49:22.268965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 12:49:22.249558 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:49:22.249659 ignition[878]: parsed url from cmdline: ""
Jan 15 12:49:22.249663 ignition[878]: no config URL provided
Jan 15 12:49:22.249668 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:49:22.249677 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:49:22.249682 ignition[878]: failed to fetch config: resource requires networking
Jan 15 12:49:22.309211 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 12:49:22.249905 ignition[878]: Ignition finished successfully
Jan 15 12:49:22.340214 ignition[887]: Ignition 2.19.0
Jan 15 12:49:22.340220 ignition[887]: Stage: fetch
Jan 15 12:49:22.340419 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:49:22.340432 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:49:22.340545 ignition[887]: parsed url from cmdline: ""
Jan 15 12:49:22.340548 ignition[887]: no config URL provided
Jan 15 12:49:22.340553 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 12:49:22.340560 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 15 12:49:22.340586 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 15 12:49:22.456082 ignition[887]: GET result: OK
Jan 15 12:49:22.456150 ignition[887]: config has been read from IMDS userdata
Jan 15 12:49:22.459774 unknown[887]: fetched base config from "system"
Jan 15 12:49:22.456193 ignition[887]: parsing config with SHA512: c981a25d0c33fb0c9610170461785c1925a101e0d0c426922def9a5a8626b8a57878c064429ef178979e2c426f5cda149908bdc70fb287081181de24835cd1d2
Jan 15 12:49:22.459784 unknown[887]: fetched base config from "system"
Jan 15 12:49:22.460127 ignition[887]: fetch: fetch complete
Jan 15 12:49:22.459789 unknown[887]: fetched user config from "azure"
Jan 15 12:49:22.460131 ignition[887]: fetch: fetch passed
Jan 15 12:49:22.465903 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 12:49:22.460172 ignition[887]: Ignition finished successfully
Jan 15 12:49:22.494235 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 12:49:22.521120 ignition[894]: Ignition 2.19.0
Jan 15 12:49:22.521126 ignition[894]: Stage: kargs
Jan 15 12:49:22.530806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 12:49:22.521304 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:49:22.521313 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:49:22.522213 ignition[894]: kargs: kargs passed
Jan 15 12:49:22.522262 ignition[894]: Ignition finished successfully
Jan 15 12:49:22.567106 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 12:49:22.590444 ignition[901]: Ignition 2.19.0
Jan 15 12:49:22.590459 ignition[901]: Stage: disks
Jan 15 12:49:22.590667 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Jan 15 12:49:22.596832 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 12:49:22.590677 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:49:22.607092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 12:49:22.595344 ignition[901]: disks: disks passed
Jan 15 12:49:22.620405 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 12:49:22.595397 ignition[901]: Ignition finished successfully
Jan 15 12:49:22.635186 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 12:49:22.649889 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 12:49:22.665386 systemd[1]: Reached target basic.target - Basic System.
Jan 15 12:49:22.704189 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 12:49:22.780505 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 15 12:49:22.791523 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 12:49:22.812140 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 12:49:22.875993 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 15 12:49:22.876285 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 12:49:22.882368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 12:49:22.933017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:49:22.955054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 12:49:22.969769 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (921)
Jan 15 12:49:22.969794 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:49:22.984078 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:49:22.990058 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:49:23.000627 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:49:22.999156 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 15 12:49:23.007183 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 12:49:23.007222 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 12:49:23.026682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:49:23.049962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 12:49:23.084208 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 12:49:23.591694 coreos-metadata[923]: Jan 15 12:49:23.591 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 15 12:49:23.601385 coreos-metadata[923]: Jan 15 12:49:23.601 INFO Fetch successful
Jan 15 12:49:23.601385 coreos-metadata[923]: Jan 15 12:49:23.601 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 15 12:49:23.621511 coreos-metadata[923]: Jan 15 12:49:23.621 INFO Fetch successful
Jan 15 12:49:23.638982 coreos-metadata[923]: Jan 15 12:49:23.638 INFO wrote hostname ci-4081.3.0-a-c63c213d7c to /sysroot/etc/hostname
Jan 15 12:49:23.651123 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 15 12:49:23.893063 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 12:49:23.929116 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory
Jan 15 12:49:23.953801 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 12:49:23.964456 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 12:49:25.006647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 12:49:25.025080 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 12:49:25.055866 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:49:25.035118 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 12:49:25.057791 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 12:49:25.082424 ignition[1039]: INFO : Ignition 2.19.0
Jan 15 12:49:25.082424 ignition[1039]: INFO : Stage: mount
Jan 15 12:49:25.091950 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:49:25.091950 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:49:25.091950 ignition[1039]: INFO : mount: mount passed
Jan 15 12:49:25.091950 ignition[1039]: INFO : Ignition finished successfully
Jan 15 12:49:25.086984 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 12:49:25.099114 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 12:49:25.132128 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 12:49:25.146181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 12:49:25.190955 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1052)
Jan 15 12:49:25.190994 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 15 12:49:25.198225 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 12:49:25.203513 kernel: BTRFS info (device sda6): using free space tree
Jan 15 12:49:25.210964 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 15 12:49:25.213155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 12:49:25.247004 ignition[1070]: INFO : Ignition 2.19.0
Jan 15 12:49:25.247004 ignition[1070]: INFO : Stage: files
Jan 15 12:49:25.256484 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 12:49:25.256484 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 15 12:49:25.256484 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 12:49:25.279359 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 12:49:25.279359 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 12:49:25.315881 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 12:49:25.325547 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 12:49:25.325547 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 12:49:25.321466 unknown[1070]: wrote ssh authorized keys file for user: core
Jan 15 12:49:25.349377 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:49:25.349377 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 15 12:49:25.395987 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 12:49:25.494509 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 15 12:49:25.948513 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 15 12:49:26.141848 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 15 12:49:26.141848 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 15 12:49:26.189743 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 12:49:26.203286 ignition[1070]: INFO : files: files passed
Jan 15 12:49:26.203286 ignition[1070]: INFO : Ignition finished successfully
Jan 15 12:49:26.203477 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 12:49:26.241750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 12:49:26.260141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 12:49:26.341093 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:49:26.341093 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:49:26.291604 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 12:49:26.374614 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:49:26.291712 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 12:49:26.318961 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:49:26.333520 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 12:49:26.375197 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 12:49:26.423768 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 12:49:26.423895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 12:49:26.439628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 12:49:26.452856 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 12:49:26.468056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 12:49:26.482248 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 12:49:26.518008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:49:26.538181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 12:49:26.557537 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 12:49:26.557636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 12:49:26.573259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:49:26.588039 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:49:26.603614 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 12:49:26.616601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 12:49:26.616696 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:49:26.635509 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 12:49:26.649832 systemd[1]: Stopped target basic.target - Basic System. Jan 15 12:49:26.662011 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 12:49:26.676921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:49:26.691327 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 12:49:26.705600 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 12:49:26.719787 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:49:26.733790 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 12:49:26.749309 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 15 12:49:26.762229 systemd[1]: Stopped target swap.target - Swaps. Jan 15 12:49:26.773676 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 12:49:26.773750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:49:26.791645 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:49:26.804476 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:49:26.818113 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 12:49:26.818166 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:49:26.835169 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 12:49:26.835237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 12:49:26.856094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 12:49:26.856150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:49:26.869270 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 12:49:26.869317 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 12:49:26.881996 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 12:49:26.882047 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 12:49:26.970868 ignition[1122]: INFO : Ignition 2.19.0 Jan 15 12:49:26.970868 ignition[1122]: INFO : Stage: umount Jan 15 12:49:26.970868 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:26.970868 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:26.970868 ignition[1122]: INFO : umount: umount passed Jan 15 12:49:26.970868 ignition[1122]: INFO : Ignition finished successfully Jan 15 12:49:26.911121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 12:49:26.926376 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 12:49:26.941599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 12:49:26.941690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:49:26.963512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 12:49:26.963580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:49:26.975881 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 12:49:26.980669 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 12:49:26.993997 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 12:49:26.994059 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 12:49:27.000868 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 12:49:27.000929 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 12:49:27.021555 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 12:49:27.021620 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 12:49:27.033534 systemd[1]: Stopped target network.target - Network. Jan 15 12:49:27.039152 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 12:49:27.039216 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:49:27.053603 systemd[1]: Stopped target paths.target - Path Units. 
Jan 15 12:49:27.068392 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 12:49:27.073972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:49:27.083394 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 12:49:27.098420 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 12:49:27.117733 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 12:49:27.117785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:49:27.129914 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 12:49:27.129971 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:49:27.143533 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 12:49:27.143590 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 12:49:27.155965 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 12:49:27.156013 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 12:49:27.168727 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 12:49:27.180875 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 12:49:27.194448 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 12:49:27.206992 systemd-networkd[877]: eth0: DHCPv6 lease lost Jan 15 12:49:27.213169 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 12:49:27.213328 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 12:49:27.233291 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 12:49:27.233405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 12:49:27.247828 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 12:49:27.482741 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: Data path switched from VF: enP28579s1 Jan 15 12:49:27.247891 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:49:27.288073 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 12:49:27.300140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 12:49:27.300215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:49:27.313535 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 12:49:27.313587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:49:27.326858 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 12:49:27.326906 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 12:49:27.339322 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 12:49:27.339374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:49:27.353020 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:49:27.387300 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 12:49:27.388458 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:49:27.401375 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 12:49:27.401447 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 12:49:27.412653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 15 12:49:27.412694 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:49:27.427414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 12:49:27.427476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:49:27.456696 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 12:49:27.456761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 12:49:27.469553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:49:27.469600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:49:27.496169 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 12:49:27.512008 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 12:49:27.512076 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:49:27.528684 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 12:49:27.528744 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:49:27.542498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 12:49:27.542548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:49:27.556677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:27.556775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:27.570722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 12:49:27.570824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 12:49:27.592237 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 12:49:27.819459 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 15 12:49:27.592374 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 12:49:27.650335 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 12:49:27.650487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 12:49:27.661337 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 12:49:27.674132 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 12:49:27.674194 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 12:49:27.718084 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 12:49:27.739994 systemd[1]: Switching root. 
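The long teardown above is systemd stopping initrd units in reverse dependency order before the pivot. A small sketch that recovers that stop sequence from journal text like this log's (the regex is tuned to this flattened format):

    # Extract the ordered "Stopped ..." events from flattened journal text.
    import re

    stop_re = re.compile(r"systemd\[1\]: (Stopped(?: target)? .+?)\. Jan")

    def stop_sequence(journal_text):
        return stop_re.findall(journal_text)

    frag = ("Jan 15 12:49:26.762229 systemd[1]: Stopped target swap.target - "
            "Swaps. Jan 15 12:49:26.773676 systemd[1]: dracut-pre-mount.service: "
            "Deactivated successfully. Jan")
    print(stop_sequence(frag))  # ['Stopped target swap.target - Swaps']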
Jan 15 12:49:27.866521 systemd-journald[217]: Journal stopped
Total pages: 1032156 Jan 15 12:49:14.405388 kernel: Policy zone: Normal Jan 15 12:49:14.405395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 12:49:14.405402 kernel: software IO TLB: area num 2. Jan 15 12:49:14.405410 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 15 12:49:14.405418 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 15 12:49:14.405425 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 15 12:49:14.405431 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 12:49:14.405439 kernel: rcu: RCU event tracing is enabled. Jan 15 12:49:14.405446 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 15 12:49:14.405453 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 12:49:14.405460 kernel: Tracing variant of Tasks RCU enabled. Jan 15 12:49:14.405467 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 12:49:14.405474 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 15 12:49:14.405481 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 15 12:49:14.405489 kernel: GICv3: 960 SPIs implemented Jan 15 12:49:14.405496 kernel: GICv3: 0 Extended SPIs implemented Jan 15 12:49:14.405503 kernel: Root IRQ handler: gic_handle_irq Jan 15 12:49:14.405510 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 15 12:49:14.405517 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 15 12:49:14.405524 kernel: ITS: No ITS available, not enabling LPIs Jan 15 12:49:14.405531 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 12:49:14.405538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:49:14.405545 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 15 12:49:14.405552 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 15 12:49:14.405559 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 15 12:49:14.405567 kernel: Console: colour dummy device 80x25 Jan 15 12:49:14.405575 kernel: printk: console [tty1] enabled Jan 15 12:49:14.405582 kernel: ACPI: Core revision 20230628 Jan 15 12:49:14.405589 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 15 12:49:14.405596 kernel: pid_max: default: 32768 minimum: 301 Jan 15 12:49:14.405603 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 15 12:49:14.405610 kernel: landlock: Up and running. Jan 15 12:49:14.405618 kernel: SELinux: Initializing. Jan 15 12:49:14.405625 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:49:14.405633 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 12:49:14.405642 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:49:14.405649 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 15 12:49:14.405656 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 15 12:49:14.405664 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 15 12:49:14.405671 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 15 12:49:14.405678 kernel: rcu: Hierarchical SRCU implementation. 
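The Memory: line above fully accounts for RAM: available plus reserved equals the total. Checking the figures from this log:

    # available + reserved == total, per the "Memory:" line in this boot log.
    avail_k, total_k, reserved_k = 3982756, 4194160, 211404
    assert avail_k + reserved_k == total_k
    print(f"reserved: {reserved_k / total_k:.1%} of {total_k // 1024} MiB")
    # reserved: 5.0% of 4095 MiB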
Jan 15 12:49:14.405685 kernel: rcu: Max phase no-delay instances is 400. Jan 15 12:49:14.405699 kernel: Remapping and enabling EFI services. Jan 15 12:49:14.405706 kernel: smp: Bringing up secondary CPUs ... Jan 15 12:49:14.405714 kernel: Detected PIPT I-cache on CPU1 Jan 15 12:49:14.405721 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 15 12:49:14.405730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 15 12:49:14.405737 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 15 12:49:14.405745 kernel: smp: Brought up 1 node, 2 CPUs Jan 15 12:49:14.405752 kernel: SMP: Total of 2 processors activated. Jan 15 12:49:14.405759 kernel: CPU features: detected: 32-bit EL0 Support Jan 15 12:49:14.405769 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 15 12:49:14.405776 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 15 12:49:14.405784 kernel: CPU features: detected: CRC32 instructions Jan 15 12:49:14.405791 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 15 12:49:14.405798 kernel: CPU features: detected: LSE atomic instructions Jan 15 12:49:14.405806 kernel: CPU features: detected: Privileged Access Never Jan 15 12:49:14.405813 kernel: CPU: All CPU(s) started at EL1 Jan 15 12:49:14.405820 kernel: alternatives: applying system-wide alternatives Jan 15 12:49:14.405828 kernel: devtmpfs: initialized Jan 15 12:49:14.405837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 12:49:14.405844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 15 12:49:14.405852 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 12:49:14.405859 kernel: SMBIOS 3.1.0 present. Jan 15 12:49:14.405867 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 15 12:49:14.405874 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 12:49:14.405882 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 15 12:49:14.405889 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 15 12:49:14.405897 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 15 12:49:14.405906 kernel: audit: initializing netlink subsys (disabled) Jan 15 12:49:14.405914 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 15 12:49:14.405921 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 12:49:14.405928 kernel: cpuidle: using governor menu Jan 15 12:49:14.405936 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 15 12:49:14.405943 kernel: ASID allocator initialised with 32768 entries Jan 15 12:49:14.405951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 12:49:14.405958 kernel: Serial: AMBA PL011 UART driver Jan 15 12:49:14.405965 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 15 12:49:14.405984 kernel: Modules: 0 pages in range for non-PLT usage Jan 15 12:49:14.405991 kernel: Modules: 509040 pages in range for PLT usage Jan 15 12:49:14.405999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 12:49:14.406007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 12:49:14.406014 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 15 12:49:14.406022 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 15 12:49:14.406029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 12:49:14.406036 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 12:49:14.406044 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 15 12:49:14.406053 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 15 12:49:14.406060 kernel: ACPI: Added _OSI(Module Device) Jan 15 12:49:14.406068 kernel: ACPI: Added _OSI(Processor Device) Jan 15 12:49:14.406075 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 15 12:49:14.406082 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 12:49:14.406090 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 12:49:14.406097 kernel: ACPI: Interpreter enabled Jan 15 12:49:14.406104 kernel: ACPI: Using GIC for interrupt routing Jan 15 12:49:14.406112 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 15 12:49:14.406120 kernel: printk: console [ttyAMA0] enabled Jan 15 12:49:14.406128 kernel: printk: bootconsole [pl11] disabled Jan 15 12:49:14.406136 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 15 12:49:14.406143 kernel: iommu: Default domain type: Translated Jan 15 12:49:14.406150 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 15 12:49:14.406158 kernel: efivars: Registered efivars operations Jan 15 12:49:14.406165 kernel: vgaarb: loaded Jan 15 12:49:14.406172 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 15 12:49:14.406180 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 12:49:14.406189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 12:49:14.406196 kernel: pnp: PnP ACPI init Jan 15 12:49:14.406204 kernel: pnp: PnP ACPI: found 0 devices Jan 15 12:49:14.406211 kernel: NET: Registered PF_INET protocol family Jan 15 12:49:14.406219 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 12:49:14.406227 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 12:49:14.406234 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 12:49:14.406242 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 12:49:14.406249 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 12:49:14.406258 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 12:49:14.406266 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:49:14.406273 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 12:49:14.406281 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 12:49:14.406288 kernel: PCI: CLS 0 bytes, default 64 Jan 15 12:49:14.406295 kernel: kvm [1]: HYP mode not available Jan 15 12:49:14.406303 kernel: Initialise system trusted keyrings Jan 15 12:49:14.406310 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 12:49:14.406318 kernel: Key type asymmetric registered Jan 15 12:49:14.406327 kernel: Asymmetric key parser 'x509' registered Jan 15 12:49:14.406334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 12:49:14.406342 kernel: io scheduler mq-deadline registered Jan 15 12:49:14.406349 kernel: io scheduler kyber registered Jan 15 12:49:14.406357 kernel: io scheduler bfq registered Jan 15 12:49:14.406364 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 12:49:14.406372 kernel: thunder_xcv, ver 1.0 Jan 15 12:49:14.406379 kernel: thunder_bgx, ver 1.0 Jan 15 12:49:14.406386 kernel: nicpf, ver 1.0 Jan 15 12:49:14.406394 kernel: nicvf, ver 1.0 Jan 15 12:49:14.406534 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 15 12:49:14.406615 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-15T12:49:13 UTC (1736945353) Jan 15 12:49:14.406625 kernel: efifb: probing for efifb Jan 15 12:49:14.406634 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 15 12:49:14.406641 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 15 12:49:14.406649 kernel: efifb: scrolling: redraw Jan 15 12:49:14.406656 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 12:49:14.406666 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 12:49:14.406674 kernel: fb0: EFI VGA frame buffer device Jan 15 12:49:14.406681 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 15 12:49:14.406689 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 15 12:49:14.406696 kernel: No ACPI PMU IRQ for CPU0 Jan 15 12:49:14.406704 kernel: No ACPI PMU IRQ for CPU1 Jan 15 12:49:14.406711 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 15 12:49:14.406719 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 15 12:49:14.406726 kernel: watchdog: Hard watchdog permanently disabled Jan 15 12:49:14.406735 kernel: NET: Registered PF_INET6 protocol family Jan 15 12:49:14.406742 kernel: Segment Routing with IPv6 Jan 15 12:49:14.406750 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 12:49:14.406757 kernel: NET: Registered PF_PACKET protocol family Jan 15 12:49:14.406765 kernel: Key type dns_resolver registered Jan 15 12:49:14.406772 kernel: registered taskstats version 1 Jan 15 12:49:14.406779 kernel: Loading compiled-in X.509 certificates Jan 15 12:49:14.406787 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 15 12:49:14.406795 kernel: Key type .fscrypt registered Jan 15 12:49:14.406804 kernel: Key type fscrypt-provisioning registered Jan 15 12:49:14.406811 kernel: ima: No TPM chip found, activating TPM-bypass! 
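The rtc-efi line above gives both the wall-clock time and its Unix epoch; the two values are consistent:

    # 1736945353 seconds after the epoch is indeed 2025-01-15T12:49:13 UTC.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1736945353, tz=timezone.utc).isoformat())
    # 2025-01-15T12:49:13+00:00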
Jan 15 12:49:14.406819 kernel: ima: Allocated hash algorithm: sha1 Jan 15 12:49:14.406827 kernel: ima: No architecture policies found Jan 15 12:49:14.406834 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 15 12:49:14.406842 kernel: clk: Disabling unused clocks Jan 15 12:49:14.406849 kernel: Freeing unused kernel memory: 39360K Jan 15 12:49:14.406857 kernel: Run /init as init process Jan 15 12:49:14.406864 kernel: with arguments: Jan 15 12:49:14.406873 kernel: /init Jan 15 12:49:14.406880 kernel: with environment: Jan 15 12:49:14.406887 kernel: HOME=/ Jan 15 12:49:14.406895 kernel: TERM=linux Jan 15 12:49:14.406902 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 15 12:49:14.406912 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 12:49:14.406921 systemd[1]: Detected virtualization microsoft. Jan 15 12:49:14.406929 systemd[1]: Detected architecture arm64. Jan 15 12:49:14.406938 systemd[1]: Running in initrd. Jan 15 12:49:14.406946 systemd[1]: No hostname configured, using default hostname. Jan 15 12:49:14.406954 systemd[1]: Hostname set to . Jan 15 12:49:14.406962 systemd[1]: Initializing machine ID from random generator. Jan 15 12:49:14.411347 systemd[1]: Queued start job for default target initrd.target. Jan 15 12:49:14.411368 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:49:14.411377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:49:14.411387 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 12:49:14.411403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 12:49:14.411412 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 12:49:14.411420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 12:49:14.411430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 15 12:49:14.411439 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 15 12:49:14.411447 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:49:14.411457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:49:14.411465 systemd[1]: Reached target paths.target - Path Units. Jan 15 12:49:14.411474 systemd[1]: Reached target slices.target - Slice Units. Jan 15 12:49:14.411482 systemd[1]: Reached target swap.target - Swaps. Jan 15 12:49:14.411490 systemd[1]: Reached target timers.target - Timer Units. Jan 15 12:49:14.411498 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:49:14.411507 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:49:14.411515 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 12:49:14.411524 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
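The odd unit names above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, come from systemd's path escaping: the leading '/' is dropped, every other '/' becomes '-', and bytes outside the safe set are hex-escaped, so '-' turns into \x2d. A sketch of that rule (systemd-escape --path is the canonical tool; this reimplementation is approximate):

    def systemd_escape_path(path):
        # Safe characters pass through; '/' maps to '-'; the rest become \xNN.
        safe = set("abcdefghijklmnopqrstuvwxyz"
                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        out = []
        for i, ch in enumerate(path.strip("/")):
            if ch == "/":
                out.append("-")
            elif ch in safe and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device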
Jan 15 12:49:14.411534 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:49:14.411543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 12:49:14.411551 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:49:14.411560 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:49:14.411568 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 12:49:14.411576 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 12:49:14.411585 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 12:49:14.411593 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 12:49:14.411601 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 12:49:14.411612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 12:49:14.411655 systemd-journald[217]: Collecting audit messages is disabled. Jan 15 12:49:14.411677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:14.411686 systemd-journald[217]: Journal started Jan 15 12:49:14.411708 systemd-journald[217]: Runtime Journal (/run/log/journal/a983d112e103439fb604e893777d34c0) is 8.0M, max 78.5M, 70.5M free. Jan 15 12:49:14.412378 systemd-modules-load[218]: Inserted module 'overlay' Jan 15 12:49:14.439596 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 12:49:14.440282 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 12:49:14.466270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 12:49:14.466317 kernel: Bridge firewalling registered Jan 15 12:49:14.469887 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:49:14.470577 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 15 12:49:14.485185 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 12:49:14.496583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 12:49:14.508142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:14.533275 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:49:14.549160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 12:49:14.561684 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 12:49:14.586153 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 12:49:14.602222 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:49:14.611958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:49:14.626438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:49:14.639483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:49:14.665293 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 12:49:14.673185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 12:49:14.689164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 15 12:49:14.717832 dracut-cmdline[250]: dracut-dracut-053 Jan 15 12:49:14.731255 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 15 12:49:14.723326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:49:14.736714 systemd-resolved[251]: Positive Trust Anchors: Jan 15 12:49:14.736723 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 12:49:14.736755 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 12:49:14.740123 systemd-resolved[251]: Defaulting to hostname 'linux'. Jan 15 12:49:14.743232 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 12:49:14.772750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:49:14.910006 kernel: SCSI subsystem initialized Jan 15 12:49:14.919013 kernel: Loading iSCSI transport class v2.0-870. Jan 15 12:49:14.930003 kernel: iscsi: registered transport (tcp) Jan 15 12:49:14.948315 kernel: iscsi: registered transport (qla4xxx) Jan 15 12:49:14.948338 kernel: QLogic iSCSI HBA Driver Jan 15 12:49:14.983568 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 12:49:14.999252 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 12:49:15.028790 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 12:49:15.028838 kernel: device-mapper: uevent: version 1.0.3 Jan 15 12:49:15.035985 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 15 12:49:15.084997 kernel: raid6: neonx8 gen() 15758 MB/s Jan 15 12:49:15.107987 kernel: raid6: neonx4 gen() 15651 MB/s Jan 15 12:49:15.127979 kernel: raid6: neonx2 gen() 13233 MB/s Jan 15 12:49:15.147979 kernel: raid6: neonx1 gen() 10489 MB/s Jan 15 12:49:15.168979 kernel: raid6: int64x8 gen() 6962 MB/s Jan 15 12:49:15.188978 kernel: raid6: int64x4 gen() 7352 MB/s Jan 15 12:49:15.208978 kernel: raid6: int64x2 gen() 6134 MB/s Jan 15 12:49:15.233401 kernel: raid6: int64x1 gen() 5061 MB/s Jan 15 12:49:15.233414 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s Jan 15 12:49:15.258505 kernel: raid6: .... 
xor() 11933 MB/s, rmw enabled Jan 15 12:49:15.258528 kernel: raid6: using neon recovery algorithm Jan 15 12:49:15.270724 kernel: xor: measuring software checksum speed Jan 15 12:49:15.270739 kernel: 8regs : 19769 MB/sec Jan 15 12:49:15.278416 kernel: 32regs : 18705 MB/sec Jan 15 12:49:15.278429 kernel: arm64_neon : 26972 MB/sec Jan 15 12:49:15.283195 kernel: xor: using function: arm64_neon (26972 MB/sec) Jan 15 12:49:15.334997 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 12:49:15.345062 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:49:15.361178 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:49:15.383588 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 15 12:49:15.390394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:49:15.410214 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 12:49:15.428072 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation Jan 15 12:49:15.455262 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:49:15.474305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 12:49:15.513261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:49:15.532476 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 12:49:15.557146 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 12:49:15.572601 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:49:15.587814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:49:15.602035 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 12:49:15.623265 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 12:49:15.637888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:49:15.638069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:49:15.682781 kernel: hv_vmbus: Vmbus version:5.3 Jan 15 12:49:15.682816 kernel: hv_vmbus: registering driver hid_hyperv Jan 15 12:49:15.654982 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:49:15.713277 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 15 12:49:15.713301 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 15 12:49:15.728424 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 15 12:49:15.676864 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:15.749517 kernel: hv_vmbus: registering driver hv_netvsc Jan 15 12:49:15.749540 kernel: hv_vmbus: registering driver hv_storvsc Jan 15 12:49:15.677107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:15.792100 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 15 12:49:15.792127 kernel: scsi host0: storvsc_host_t Jan 15 12:49:15.792308 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 15 12:49:15.792320 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 15 12:49:15.792343 kernel: pps_core: Software ver. 
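The raid6 and xor lines above are a boot-time bake-off: the kernel times every available implementation and keeps the fastest. The same selection, replayed over this log's measured figures:

    # Pick the fastest implementation, exactly as the log reports.
    gen_mb_s = {"neonx8": 15758, "neonx4": 15651, "neonx2": 13233,
                "neonx1": 10489, "int64x8": 6962, "int64x4": 7352,
                "int64x2": 6134, "int64x1": 5061}
    xor_mb_s = {"8regs": 19769, "32regs": 18705, "arm64_neon": 26972}
    print("raid6: using algorithm", max(gen_mb_s, key=gen_mb_s.get))  # neonx8
    print("xor: using function:", max(xor_mb_s, key=xor_mb_s.get))  # arm64_neon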
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 15 12:49:15.699079 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:15.809092 kernel: scsi host1: storvsc_host_t Jan 15 12:49:15.734789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:15.777414 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:49:15.810781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:15.810879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:15.858251 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 15 12:49:15.858409 kernel: PTP clock support registered Jan 15 12:49:15.857706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:15.893270 kernel: hv_utils: Registering HyperV Utility Driver Jan 15 12:49:15.893298 kernel: hv_vmbus: registering driver hv_utils Jan 15 12:49:15.883795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:16.270013 kernel: hv_utils: Shutdown IC version 3.2 Jan 15 12:49:16.270046 kernel: hv_utils: TimeSync IC version 4.0 Jan 15 12:49:16.270059 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: VF slot 1 added Jan 15 12:49:16.271047 kernel: hv_utils: Heartbeat IC version 3.0 Jan 15 12:49:16.269834 systemd-resolved[251]: Clock change detected. Flushing caches. Jan 15 12:49:16.273266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 12:49:16.312170 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 15 12:49:16.385723 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 12:49:16.385746 kernel: hv_vmbus: registering driver hv_pci Jan 15 12:49:16.385757 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 15 12:49:16.385893 kernel: hv_pci 295fc9d2-6fa3-4bdb-8bb4-bb1d61c2348b: PCI VMBus probing: Using version 0x10004 Jan 15 12:49:16.442007 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 15 12:49:16.442154 kernel: hv_pci 295fc9d2-6fa3-4bdb-8bb4-bb1d61c2348b: PCI host bridge to bus 6fa3:00 Jan 15 12:49:16.442250 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 15 12:49:16.442339 kernel: pci_bus 6fa3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 15 12:49:16.442445 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 15 12:49:16.442534 kernel: pci_bus 6fa3:00: No busn resource found for root bus, will use [bus 00-ff] Jan 15 12:49:16.442623 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 15 12:49:16.442784 kernel: pci 6fa3:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 15 12:49:16.442902 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:49:16.442914 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 15 12:49:16.443038 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 15 12:49:16.443128 kernel: pci 6fa3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:49:16.443219 kernel: pci 6fa3:00:02.0: enabling Extended Tags Jan 15 12:49:16.443306 kernel: pci 6fa3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6fa3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 15 12:49:16.443388 kernel: pci_bus 6fa3:00: busn_res: [bus 00-ff] end is updated to 00 Jan 15 12:49:16.443467 kernel: pci 6fa3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 15 12:49:16.350623 systemd[1]: Finished 
dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:49:16.492723 kernel: mlx5_core 6fa3:00:02.0: enabling device (0000 -> 0002) Jan 15 12:49:16.713340 kernel: mlx5_core 6fa3:00:02.0: firmware version: 16.30.1284 Jan 15 12:49:16.713478 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: VF registering: eth1 Jan 15 12:49:16.713578 kernel: mlx5_core 6fa3:00:02.0 eth1: joined to eth0 Jan 15 12:49:16.713673 kernel: mlx5_core 6fa3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 15 12:49:16.723981 kernel: mlx5_core 6fa3:00:02.0 enP28579s1: renamed from eth1 Jan 15 12:49:16.871100 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 15 12:49:16.990277 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (495) Jan 15 12:49:16.998966 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (506) Jan 15 12:49:17.005447 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 15 12:49:17.014251 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 15 12:49:17.027246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 12:49:17.063286 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 12:49:17.076734 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 15 12:49:17.105967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:49:17.115957 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:49:18.126924 disk-uuid[608]: The operation has completed successfully. Jan 15 12:49:18.134600 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 15 12:49:18.196384 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 12:49:18.196492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 12:49:18.222167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 15 12:49:18.238142 sh[694]: Success Jan 15 12:49:18.269985 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 15 12:49:18.470063 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 15 12:49:18.494058 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 15 12:49:18.504862 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 15 12:49:18.543950 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 15 12:49:18.544012 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:49:18.556952 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 15 12:49:18.556993 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 12:49:18.561913 kernel: BTRFS info (device dm-0): using free space tree Jan 15 12:49:18.892433 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 15 12:49:18.898546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 12:49:18.921235 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 12:49:18.930155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
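/dev/mapper/usr above is a dm-verity device: a read-only /usr whose reads are checked block-by-block against a hash tree rooted in the verity.usrhash= value on the kernel command line. Roughly the equivalent manual step; the device paths are placeholders, not taken from this log, and Flatcar's generator handles offsets and options this sketch omits:

    # Map a verity device with veritysetup(8); reads then fail on any
    # corruption. /dev/sdX1 and /dev/sdX2 are placeholder data/hash devices.
    import subprocess

    subprocess.run(
        ["veritysetup", "open", "/dev/sdX1", "usr", "/dev/sdX2",
         "c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0"],
        check=True)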
Jan 15 12:49:18.971780 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:49:18.971837 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:49:18.977168 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:49:19.001229 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:49:19.008826 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 15 12:49:19.023204 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:49:19.032358 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:49:19.060556 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 12:49:19.068453 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 12:49:19.085236 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 12:49:19.097257 systemd-networkd[877]: lo: Link UP Jan 15 12:49:19.097261 systemd-networkd[877]: lo: Gained carrier Jan 15 12:49:19.098900 systemd-networkd[877]: Enumeration completed Jan 15 12:49:19.099292 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 12:49:19.111617 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:19.111621 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:49:19.112094 systemd[1]: Reached target network.target - Network. Jan 15 12:49:19.211953 kernel: mlx5_core 6fa3:00:02.0 enP28579s1: Link up Jan 15 12:49:19.252093 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: Data path switched to VF: enP28579s1 Jan 15 12:49:19.252454 systemd-networkd[877]: enP28579s1: Link UP Jan 15 12:49:19.252538 systemd-networkd[877]: eth0: Link UP Jan 15 12:49:19.252658 systemd-networkd[877]: eth0: Gained carrier Jan 15 12:49:19.252667 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:19.276679 systemd-networkd[877]: enP28579s1: Gained carrier Jan 15 12:49:19.290981 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:49:21.112197 systemd-networkd[877]: enP28579s1: Gained IPv6LL Jan 15 12:49:21.240038 systemd-networkd[877]: eth0: Gained IPv6LL Jan 15 12:49:22.249499 ignition[878]: Ignition 2.19.0 Jan 15 12:49:22.249513 ignition[878]: Stage: fetch-offline Jan 15 12:49:22.249550 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:22.268965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:49:22.249558 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:22.249659 ignition[878]: parsed url from cmdline: "" Jan 15 12:49:22.249663 ignition[878]: no config URL provided Jan 15 12:49:22.249668 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:49:22.249677 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:49:22.249682 ignition[878]: failed to fetch config: resource requires networking Jan 15 12:49:22.309211 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
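With a DHCPv4 lease in hand, the retried fetch stage (below) can reach the Azure instance-metadata service that fetch-offline could not. A sketch of the same request; the Metadata: true header is the standard IMDS requirement, and the base64 decoding of userData is an assumption about the response format:

    # Fetch instance userdata from Azure IMDS, as the fetch stage does below.
    import base64, urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/userData"
        "?api-version=2021-01-01&format=text",
        headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        userdata = base64.b64decode(resp.read())
    print(userdata[:80])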
Jan 15 12:49:22.249905 ignition[878]: Ignition finished successfully Jan 15 12:49:22.340214 ignition[887]: Ignition 2.19.0 Jan 15 12:49:22.340220 ignition[887]: Stage: fetch Jan 15 12:49:22.340419 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:22.340432 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:22.340545 ignition[887]: parsed url from cmdline: "" Jan 15 12:49:22.340548 ignition[887]: no config URL provided Jan 15 12:49:22.340553 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 12:49:22.340560 ignition[887]: no config at "/usr/lib/ignition/user.ign" Jan 15 12:49:22.340586 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 15 12:49:22.456082 ignition[887]: GET result: OK Jan 15 12:49:22.456150 ignition[887]: config has been read from IMDS userdata Jan 15 12:49:22.459774 unknown[887]: fetched base config from "system" Jan 15 12:49:22.456193 ignition[887]: parsing config with SHA512: c981a25d0c33fb0c9610170461785c1925a101e0d0c426922def9a5a8626b8a57878c064429ef178979e2c426f5cda149908bdc70fb287081181de24835cd1d2 Jan 15 12:49:22.459784 unknown[887]: fetched base config from "system" Jan 15 12:49:22.460127 ignition[887]: fetch: fetch complete Jan 15 12:49:22.459789 unknown[887]: fetched user config from "azure" Jan 15 12:49:22.460131 ignition[887]: fetch: fetch passed Jan 15 12:49:22.465903 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 15 12:49:22.460172 ignition[887]: Ignition finished successfully Jan 15 12:49:22.494235 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 12:49:22.521120 ignition[894]: Ignition 2.19.0 Jan 15 12:49:22.521126 ignition[894]: Stage: kargs Jan 15 12:49:22.530806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 12:49:22.521304 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:22.521313 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:22.522213 ignition[894]: kargs: kargs passed Jan 15 12:49:22.522262 ignition[894]: Ignition finished successfully Jan 15 12:49:22.567106 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 12:49:22.590444 ignition[901]: Ignition 2.19.0 Jan 15 12:49:22.590459 ignition[901]: Stage: disks Jan 15 12:49:22.590667 ignition[901]: no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:22.596832 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 12:49:22.590677 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:22.607092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 12:49:22.595344 ignition[901]: disks: disks passed Jan 15 12:49:22.620405 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 12:49:22.595397 ignition[901]: Ignition finished successfully Jan 15 12:49:22.635186 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 12:49:22.649889 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 12:49:22.665386 systemd[1]: Reached target basic.target - Basic System. Jan 15 12:49:22.704189 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
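The fetch stage above reads the instance's userData from the Azure IMDS endpoint shown in the GET line. A sketch of the same request: Azure IMDS requires the "Metadata: true" header and delivers userData base64-encoded (both properties of the real service), while the decode-and-print handling here is purely illustrative:

```python
import base64
import urllib.request

# Endpoint copied from the GET line in the log above.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    encoded = resp.read()

# The payload is base64; Ignition decodes it before parsing the
# embedded config, as the "config has been read from IMDS userdata"
# entry above implies.
print(base64.b64decode(encoded).decode("utf-8", errors="replace"))
```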
Jan 15 12:49:22.780505 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 15 12:49:22.791523 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 12:49:22.812140 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 12:49:22.875993 kernel: EXT4-fs (sda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 15 12:49:22.876285 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 12:49:22.882368 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 12:49:22.933017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 12:49:22.955054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 12:49:22.969769 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (921) Jan 15 12:49:22.969794 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:49:22.984078 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:49:22.990058 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:49:23.000627 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:49:22.999156 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 15 12:49:23.007183 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 12:49:23.007222 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:49:23.026682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 12:49:23.049962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 12:49:23.084208 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 12:49:23.591694 coreos-metadata[923]: Jan 15 12:49:23.591 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 12:49:23.601385 coreos-metadata[923]: Jan 15 12:49:23.601 INFO Fetch successful Jan 15 12:49:23.601385 coreos-metadata[923]: Jan 15 12:49:23.601 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 15 12:49:23.621511 coreos-metadata[923]: Jan 15 12:49:23.621 INFO Fetch successful Jan 15 12:49:23.638982 coreos-metadata[923]: Jan 15 12:49:23.638 INFO wrote hostname ci-4081.3.0-a-c63c213d7c to /sysroot/etc/hostname Jan 15 12:49:23.651123 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 12:49:23.893063 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Jan 15 12:49:23.929116 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Jan 15 12:49:23.953801 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Jan 15 12:49:23.964456 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Jan 15 12:49:25.006647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 12:49:25.025080 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 12:49:25.055866 kernel: BTRFS info (device sda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:49:25.035118 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 12:49:25.057791 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
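flatcar-metadata-hostname, above, probes the fabric endpoint and then fetches the instance name from IMDS before writing it under /sysroot. A sketch of the second step using the exact URL from the coreos-metadata entries; the helper name and the absence of error handling are this sketch's own:

```python
import urllib.request

def imds_text(path: str) -> str:
    req = urllib.request.Request(
        "http://169.254.169.254" + path, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode().strip()

# URL taken from the log above; writing under /sysroot persists the
# Azure-assigned name into the real root before switch-root.
name = imds_text("/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")
with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")
```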
Jan 15 12:49:25.082424 ignition[1039]: INFO : Ignition 2.19.0 Jan 15 12:49:25.082424 ignition[1039]: INFO : Stage: mount Jan 15 12:49:25.091950 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:25.091950 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:25.091950 ignition[1039]: INFO : mount: mount passed Jan 15 12:49:25.091950 ignition[1039]: INFO : Ignition finished successfully Jan 15 12:49:25.086984 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 12:49:25.099114 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 12:49:25.132128 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 12:49:25.146181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 12:49:25.190955 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1052) Jan 15 12:49:25.190994 kernel: BTRFS info (device sda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 15 12:49:25.198225 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 15 12:49:25.203513 kernel: BTRFS info (device sda6): using free space tree Jan 15 12:49:25.210964 kernel: BTRFS info (device sda6): auto enabling async discard Jan 15 12:49:25.213155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 12:49:25.247004 ignition[1070]: INFO : Ignition 2.19.0 Jan 15 12:49:25.247004 ignition[1070]: INFO : Stage: files Jan 15 12:49:25.256484 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:25.256484 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:25.256484 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Jan 15 12:49:25.279359 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 12:49:25.279359 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 12:49:25.315881 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 12:49:25.325547 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 12:49:25.325547 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 12:49:25.321466 unknown[1070]: wrote ssh authorized keys file for user: core Jan 15 12:49:25.349377 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 15 12:49:25.349377 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 15 12:49:25.395987 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 12:49:25.494509 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 12:49:25.507635 
ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 15 12:49:25.507635 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 15 12:49:25.948513 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 12:49:26.141848 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 15 12:49:26.141848 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 12:49:26.189743 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 12:49:26.203286 ignition[1070]: INFO : files: files passed Jan 15 12:49:26.203286 ignition[1070]: INFO : Ignition finished successfully Jan 15 12:49:26.203477 systemd[1]: Finished ignition-files.service - Ignition (files). 
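The files-stage operations above (an SSH key for user "core", the helm tarball fetched into /opt, the prepare-helm unit and its preset) are driven by the Ignition config delivered via userData. The config itself is never printed in the log; the snippet below is a hypothetical Ignition v3 fragment that would produce the first two operations, with the spec version assumed and the key value a placeholder:

```python
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA_PLACEHOLDER core"],
    }]},
    "storage": {"files": [{
        "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
        "contents": {"source":
            "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
    }]},
}
print(json.dumps(config, indent=2))
```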
Jan 15 12:49:26.241750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 12:49:26.260141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 12:49:26.341093 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:49:26.341093 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:49:26.291604 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 12:49:26.374614 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 12:49:26.291712 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 12:49:26.318961 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:49:26.333520 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 12:49:26.375197 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 12:49:26.423768 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 12:49:26.423895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 12:49:26.439628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 12:49:26.452856 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 12:49:26.468056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 12:49:26.482248 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 12:49:26.518008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:49:26.538181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 12:49:26.557537 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 12:49:26.557636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 12:49:26.573259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:49:26.588039 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 12:49:26.603614 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 12:49:26.616601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 12:49:26.616696 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 12:49:26.635509 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 12:49:26.649832 systemd[1]: Stopped target basic.target - Basic System. Jan 15 12:49:26.662011 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 12:49:26.676921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 12:49:26.691327 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 12:49:26.705600 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 12:49:26.719787 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 12:49:26.733790 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 12:49:26.749309 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 15 12:49:26.762229 systemd[1]: Stopped target swap.target - Swaps. Jan 15 12:49:26.773676 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 12:49:26.773750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 12:49:26.791645 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:49:26.804476 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:49:26.818113 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 12:49:26.818166 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:49:26.835169 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 12:49:26.835237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 12:49:26.856094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 12:49:26.856150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 12:49:26.869270 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 12:49:26.869317 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 12:49:26.881996 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 15 12:49:26.882047 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 15 12:49:26.970868 ignition[1122]: INFO : Ignition 2.19.0 Jan 15 12:49:26.970868 ignition[1122]: INFO : Stage: umount Jan 15 12:49:26.970868 ignition[1122]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 12:49:26.970868 ignition[1122]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 15 12:49:26.970868 ignition[1122]: INFO : umount: umount passed Jan 15 12:49:26.970868 ignition[1122]: INFO : Ignition finished successfully Jan 15 12:49:26.911121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 12:49:26.926376 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 12:49:26.941599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 12:49:26.941690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:49:26.963512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 12:49:26.963580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 12:49:26.975881 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 12:49:26.980669 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 12:49:26.993997 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 12:49:26.994059 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 12:49:27.000868 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 12:49:27.000929 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 12:49:27.021555 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 15 12:49:27.021620 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 15 12:49:27.033534 systemd[1]: Stopped target network.target - Network. Jan 15 12:49:27.039152 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 12:49:27.039216 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 12:49:27.053603 systemd[1]: Stopped target paths.target - Path Units. 
Jan 15 12:49:27.068392 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 12:49:27.073972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:49:27.083394 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 12:49:27.098420 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 12:49:27.117733 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 12:49:27.117785 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 12:49:27.129914 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 12:49:27.129971 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 12:49:27.143533 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 12:49:27.143590 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 12:49:27.155965 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 12:49:27.156013 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 12:49:27.168727 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 12:49:27.180875 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 12:49:27.194448 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 12:49:27.206992 systemd-networkd[877]: eth0: DHCPv6 lease lost Jan 15 12:49:27.213169 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 12:49:27.213328 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 12:49:27.233291 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 12:49:27.233405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 12:49:27.247828 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 12:49:27.482741 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: Data path switched from VF: enP28579s1 Jan 15 12:49:27.247891 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:49:27.288073 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 12:49:27.300140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 12:49:27.300215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 12:49:27.313535 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 12:49:27.313587 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:49:27.326858 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 12:49:27.326906 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 12:49:27.339322 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 12:49:27.339374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:49:27.353020 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:49:27.387300 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 12:49:27.388458 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:49:27.401375 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 12:49:27.401447 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 12:49:27.412653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 15 12:49:27.412694 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:49:27.427414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 12:49:27.427476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 12:49:27.456696 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 12:49:27.456761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 12:49:27.469553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 12:49:27.469600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 12:49:27.496169 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 12:49:27.512008 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 12:49:27.512076 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:49:27.528684 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 15 12:49:27.528744 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:49:27.542498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 12:49:27.542548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:49:27.556677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:27.556775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:27.570722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 12:49:27.570824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 12:49:27.592237 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 12:49:27.819459 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 15 12:49:27.592374 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 12:49:27.650335 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 12:49:27.650487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 12:49:27.661337 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 12:49:27.674132 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 12:49:27.674194 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 12:49:27.718084 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 12:49:27.739994 systemd[1]: Switching root. Jan 15 12:49:27.866521 systemd-journald[217]: Journal stopped Jan 15 12:49:32.875489 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 12:49:32.875521 kernel: SELinux: policy capability open_perms=1 Jan 15 12:49:32.875533 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 12:49:32.875541 kernel: SELinux: policy capability always_check_network=0 Jan 15 12:49:32.875553 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 12:49:32.875561 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 12:49:32.875570 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 12:49:32.875578 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 12:49:32.875586 kernel: audit: type=1403 audit(1736945369.033:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 12:49:32.875596 systemd[1]: Successfully loaded SELinux policy in 159.898ms. 
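The audit record above (type=1403, res=1) is the kernel's marker that the SELinux policy load succeeded. A toy parser for that audit prefix format, handy when grepping boot logs like this one; the group names are this sketch's own:

```python
import re

AUDIT = re.compile(
    r"type=(?P<rtype>\d+) audit\((?P<ts>\d+\.\d+):(?P<serial>\d+)\)")

line = ("audit: type=1403 audit(1736945369.033:2): "
        "auid=4294967295 ses=4294967295 lsm=selinux res=1")
m = AUDIT.search(line)
if m:
    # The timestamp is seconds since the epoch; the serial orders
    # records emitted at the same instant.
    print(m["rtype"], float(m["ts"]), m["serial"])
```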
Jan 15 12:49:32.875608 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.127ms. Jan 15 12:49:32.875619 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 15 12:49:32.875628 systemd[1]: Detected virtualization microsoft. Jan 15 12:49:32.875636 systemd[1]: Detected architecture arm64. Jan 15 12:49:32.875646 systemd[1]: Detected first boot. Jan 15 12:49:32.875657 systemd[1]: Hostname set to <ci-4081.3.0-a-c63c213d7c>. Jan 15 12:49:32.875667 systemd[1]: Initializing machine ID from random generator. Jan 15 12:49:32.875676 zram_generator::config[1163]: No configuration found. Jan 15 12:49:32.875686 systemd[1]: Populated /etc with preset unit settings. Jan 15 12:49:32.875695 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 12:49:32.875704 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 12:49:32.875714 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 12:49:32.875726 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 12:49:32.875736 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 12:49:32.875746 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 12:49:32.875755 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 12:49:32.875766 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 12:49:32.875776 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 12:49:32.875785 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 12:49:32.875796 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 12:49:32.875805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 12:49:32.875815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 12:49:32.875824 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 12:49:32.875833 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 12:49:32.875843 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 12:49:32.875853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 12:49:32.875862 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 15 12:49:32.875873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 12:49:32.875882 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 12:49:32.875892 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 12:49:32.875903 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 12:49:32.875913 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 12:49:32.875923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
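"Initializing machine ID from random generator", above, is the first-boot path: no machine ID was provisioned, so systemd creates a random 128-bit one and stores it as 32 lowercase hex characters in /etc/machine-id. A close approximation, since systemd formats the ID like a random UUID, which is what uuid4 produces:

```python
import uuid

machine_id = uuid.uuid4().hex  # 32 lowercase hex chars, 128 bits
assert len(machine_id) == 32
print(machine_id)
# On a real first boot this value would be written to /etc/machine-id.
```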
Jan 15 12:49:32.875948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 12:49:32.875960 systemd[1]: Reached target slices.target - Slice Units. Jan 15 12:49:32.875972 systemd[1]: Reached target swap.target - Swaps. Jan 15 12:49:32.875982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 12:49:32.875992 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 12:49:32.876002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 12:49:32.876011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 12:49:32.876021 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 12:49:32.876032 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 12:49:32.876042 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 12:49:32.876052 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 12:49:32.876061 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 12:49:32.876071 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 12:49:32.876081 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 12:49:32.876090 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 12:49:32.876102 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 12:49:32.876112 systemd[1]: Reached target machines.target - Containers. Jan 15 12:49:32.876122 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 12:49:32.876132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:32.876141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 12:49:32.876151 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 12:49:32.876161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:32.876170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 12:49:32.876182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:32.876191 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 12:49:32.876201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:32.876211 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 12:49:32.876221 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 12:49:32.876231 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 12:49:32.876240 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 12:49:32.876250 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 12:49:32.876261 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 12:49:32.876271 kernel: loop: module loaded Jan 15 12:49:32.876280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 12:49:32.876289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
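Unit names throughout this log, such as dev-disk-by\x2dlabel-OEM.device, look mangled but are systemd's normal path escaping: "-" separates path components inside a unit name, so literal dashes become \x2d. A toy re-implementation of the device-unit case (the real systemd-escape also hex-escapes other special bytes and handles empty components):

```python
def escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")          # "/" is spelled "-" in unit names
        elif ch.isalnum() or ch in "_.:":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # e.g. "-" -> \x2d
    return "".join(out)

print(escape_path("/dev/disk/by-label/OEM") + ".device")
# -> dev-disk-by\x2dlabel-OEM.device, as seen in the log
```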
Jan 15 12:49:32.876299 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 12:49:32.876309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 12:49:32.876319 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 12:49:32.876328 systemd[1]: Stopped verity-setup.service. Jan 15 12:49:32.876338 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 12:49:32.876349 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 12:49:32.876358 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 12:49:32.876368 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 12:49:32.876378 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 12:49:32.876387 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 12:49:32.876397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 12:49:32.876407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:32.876417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:32.876427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:32.876438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:32.876448 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 12:49:32.876458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:32.876467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 12:49:32.876477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 12:49:32.876510 systemd-journald[1242]: Collecting audit messages is disabled. Jan 15 12:49:32.876537 systemd-journald[1242]: Journal started Jan 15 12:49:32.876557 systemd-journald[1242]: Runtime Journal (/run/log/journal/4c65a478df6948b68823871bed357b45) is 8.0M, max 78.5M, 70.5M free. Jan 15 12:49:31.144613 systemd[1]: Queued start job for default target multi-user.target. Jan 15 12:49:31.309839 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 15 12:49:31.310233 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 12:49:31.310536 systemd[1]: systemd-journald.service: Consumed 3.730s CPU time. Jan 15 12:49:32.891639 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 12:49:32.903331 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 12:49:32.909491 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 12:49:32.909549 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 12:49:32.917067 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 15 12:49:32.928121 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 12:49:32.935746 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 12:49:32.942780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:32.944468 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
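Once systemd-journald is up (its runtime journal stats are logged above), entries like these are queryable as structured data rather than grepped text. A small sketch using journalctl's JSON output, which prints one object per line; it assumes journalctl is installed and readable by the caller:

```python
import json
import subprocess

out = subprocess.run(
    ["journalctl", "-o", "json", "-n", "5", "--no-pager"],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    entry = json.loads(line)
    # _SYSTEMD_UNIT is absent for kernel messages, hence the default.
    print(entry.get("_SYSTEMD_UNIT", "-"), entry.get("MESSAGE"))
```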
Jan 15 12:49:32.952622 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 12:49:32.959409 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 12:49:32.961123 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 12:49:32.976245 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 12:49:32.989974 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 12:49:32.999828 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 12:49:33.019449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 12:49:33.032144 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 15 12:49:33.047308 udevadm[1292]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 15 12:49:33.166346 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 12:49:33.260196 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 12:49:33.260903 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 12:49:33.273965 kernel: fuse: init (API version 7.39) Jan 15 12:49:33.274168 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 12:49:33.274351 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 12:49:33.280532 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 15 12:49:33.280826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 12:49:33.280964 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 15 12:49:33.287591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 12:49:33.303112 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 12:49:33.314625 systemd-journald[1242]: Time spent on flushing to /var/log/journal/4c65a478df6948b68823871bed357b45 is 16.140ms for 897 entries. Jan 15 12:49:33.314625 systemd-journald[1242]: System Journal (/var/log/journal/4c65a478df6948b68823871bed357b45) is 8.0M, max 2.6G, 2.6G free. Jan 15 12:49:33.661187 systemd-journald[1242]: Received client request to flush runtime journal. Jan 15 12:49:33.661229 kernel: ACPI: bus type drm_connector registered Jan 15 12:49:33.661243 kernel: loop0: detected capacity change from 0 to 31320 Jan 15 12:49:33.322848 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 12:49:33.331125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 12:49:33.346866 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 12:49:33.358770 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 12:49:33.358916 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 12:49:33.365268 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 12:49:33.372019 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 12:49:33.385088 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 15 12:49:33.640381 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 12:49:33.648159 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 12:49:33.662194 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 15 12:49:33.669887 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 12:49:33.705556 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 12:49:33.719120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 12:49:33.735174 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 15 12:49:33.735190 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 15 12:49:33.739848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 12:49:34.160017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 12:49:34.907164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 12:49:34.909963 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 15 12:49:36.022972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 12:49:36.085966 kernel: loop1: detected capacity change from 0 to 189592 Jan 15 12:49:36.257964 kernel: loop2: detected capacity change from 0 to 114432 Jan 15 12:49:39.805961 kernel: loop3: detected capacity change from 0 to 114328 Jan 15 12:49:40.186231 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 12:49:40.199086 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 12:49:40.217965 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 15 12:49:40.569725 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 12:49:40.587255 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 12:49:40.627242 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 15 12:49:40.915963 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 12:49:40.922124 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 12:49:40.951315 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 12:49:40.969992 kernel: hv_vmbus: registering driver hyperv_fb Jan 15 12:49:40.978957 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 15 12:49:40.979059 kernel: hv_vmbus: registering driver hv_balloon Jan 15 12:49:40.979095 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 15 12:49:40.986981 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 15 12:49:40.987069 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 15 12:49:40.996639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:41.001319 kernel: Console: switching to colour dummy device 80x25 Jan 15 12:49:41.002954 kernel: Console: switching to colour frame buffer device 128x48 Jan 15 12:49:41.020357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:41.022003 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:41.038303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 15 12:49:41.049712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 12:49:41.049912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:41.066093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 12:49:41.353856 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1336) Jan 15 12:49:41.370962 kernel: loop4: detected capacity change from 0 to 31320 Jan 15 12:49:41.391976 kernel: loop5: detected capacity change from 0 to 189592 Jan 15 12:49:41.398616 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 15 12:49:41.412497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 12:49:41.419096 kernel: loop6: detected capacity change from 0 to 114432 Jan 15 12:49:41.431964 kernel: loop7: detected capacity change from 0 to 114328 Jan 15 12:49:41.437357 (sd-merge)[1408]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 15 12:49:41.437813 (sd-merge)[1408]: Merged extensions into '/usr'. Jan 15 12:49:41.441299 systemd[1]: Reloading requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 12:49:41.441424 systemd[1]: Reloading... Jan 15 12:49:41.509043 zram_generator::config[1443]: No configuration found. Jan 15 12:49:41.889498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:41.959670 systemd[1]: Reloading finished in 517 ms. Jan 15 12:49:41.989650 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 12:49:41.997601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 12:49:42.012082 systemd[1]: Starting ensure-sysext.service... Jan 15 12:49:42.018084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 12:49:42.026298 systemd[1]: Reloading requested from client PID 1505 ('systemctl') (unit ensure-sysext.service)... Jan 15 12:49:42.026314 systemd[1]: Reloading... Jan 15 12:49:42.059146 systemd-networkd[1338]: lo: Link UP Jan 15 12:49:42.062352 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 12:49:42.062618 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 15 12:49:42.063295 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 12:49:42.063503 systemd-tmpfiles[1506]: ACLs are not supported, ignoring. Jan 15 12:49:42.063548 systemd-tmpfiles[1506]: ACLs are not supported, ignoring. Jan 15 12:49:42.063983 systemd-networkd[1338]: lo: Gained carrier Jan 15 12:49:42.066351 systemd-networkd[1338]: Enumeration completed Jan 15 12:49:42.068065 systemd-networkd[1338]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:42.068072 systemd-networkd[1338]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:49:42.070269 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot. 
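The (sd-merge) entries above are systemd-sysext combining the staged extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure') into an overlay on /usr. Activation only requires the image, or a symlink to it, to sit in a search directory such as /etc/extensions; a sketch of the link that op(9) in the files stage created, with both paths copied from the log:

```python
import os

target = "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
link = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(link), exist_ok=True)
if not os.path.islink(link):
    # systemd-sysext merges whatever *.raw images it finds here the
    # next time "systemd-sysext merge" runs (or the service reloads).
    os.symlink(target, link)
```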
Jan 15 12:49:42.070499 systemd-tmpfiles[1506]: Skipping /boot Jan 15 12:49:42.080538 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 12:49:42.080686 systemd-tmpfiles[1506]: Skipping /boot Jan 15 12:49:42.146732 zram_generator::config[1537]: No configuration found. Jan 15 12:49:42.190955 kernel: mlx5_core 6fa3:00:02.0 enP28579s1: Link up Jan 15 12:49:42.248289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:42.288068 kernel: hv_netvsc 000d3afe-9486-000d-3afe-9486000d3afe eth0: Data path switched to VF: enP28579s1 Jan 15 12:49:42.288316 systemd-networkd[1338]: enP28579s1: Link UP Jan 15 12:49:42.288545 systemd-networkd[1338]: eth0: Link UP Jan 15 12:49:42.288549 systemd-networkd[1338]: eth0: Gained carrier Jan 15 12:49:42.288564 systemd-networkd[1338]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:42.292186 systemd-networkd[1338]: enP28579s1: Gained carrier Jan 15 12:49:42.301987 systemd-networkd[1338]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:49:42.322041 systemd[1]: Reloading finished in 295 ms. Jan 15 12:49:42.339528 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 12:49:42.346855 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 15 12:49:42.358415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 12:49:42.370974 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 12:49:42.381096 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 12:49:42.389098 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 15 12:49:42.397102 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 12:49:42.405158 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 12:49:42.416250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 12:49:42.428621 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 12:49:42.438924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:42.441583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:42.454604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:42.470071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:42.478313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:42.480297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:42.480430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:42.487783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:42.488174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:42.496080 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 15 12:49:42.496306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:42.511043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 12:49:42.519158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 12:49:42.526210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 12:49:42.533829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 12:49:42.544045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 12:49:42.550156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 12:49:42.550350 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 12:49:42.557219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 12:49:42.557389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 12:49:42.564322 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 12:49:42.564460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 12:49:42.572201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 12:49:42.572367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 12:49:42.579521 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 12:49:42.579644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 12:49:42.589213 systemd[1]: Finished ensure-sysext.service. Jan 15 12:49:42.600008 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 12:49:42.600152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 12:49:42.602004 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 12:49:42.611984 lvm[1600]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 12:49:42.636441 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 15 12:49:42.643768 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 12:49:42.655066 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 15 12:49:42.665627 lvm[1633]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 15 12:49:42.695409 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 15 12:49:42.859031 systemd-resolved[1606]: Positive Trust Anchors: Jan 15 12:49:42.859048 systemd-resolved[1606]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 12:49:42.859079 systemd-resolved[1606]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 12:49:42.913617 systemd-resolved[1606]: Using system hostname 'ci-4081.3.0-a-c63c213d7c'. Jan 15 12:49:42.915680 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 12:49:42.922446 systemd[1]: Reached target network.target - Network. Jan 15 12:49:42.927998 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 12:49:43.261424 augenrules[1638]: No rules Jan 15 12:49:43.262389 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 12:49:43.647297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 12:49:43.768060 systemd-networkd[1338]: eth0: Gained IPv6LL Jan 15 12:49:43.771001 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 12:49:43.778317 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 12:49:43.972326 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 12:49:44.216094 systemd-networkd[1338]: enP28579s1: Gained IPv6LL Jan 15 12:49:46.149631 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 12:49:46.158164 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 12:49:52.422956 ldconfig[1285]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 12:49:52.431436 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 12:49:52.443274 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 12:49:52.458137 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 12:49:52.466532 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 12:49:52.473380 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 12:49:52.480374 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 12:49:52.487769 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 12:49:52.493862 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 12:49:52.501119 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 12:49:52.508408 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 12:49:52.508442 systemd[1]: Reached target paths.target - Path Units. Jan 15 12:49:52.513617 systemd[1]: Reached target timers.target - Timer Units. 
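systemd-resolved's positive trust anchor, logged just above, is the DNS root's DS record for the 2017 KSK: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). Splitting it into those fields is mechanical:

```python
anchor = (". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d084"
          "58e880409bbc683457104237c7f8ec8d")

owner, _cls, _rrtype, keytag, alg, digest_type, digest = anchor.split()
print(f"owner={owner} keytag={keytag} alg={alg} "
      f"digest_type={digest_type} digest={digest[:16]}...")
```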
Jan 15 12:49:52.536824 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 12:49:52.545484 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 12:49:52.583669 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 12:49:52.590285 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 12:49:52.596595 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 12:49:52.602125 systemd[1]: Reached target basic.target - Basic System. Jan 15 12:49:52.607337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 12:49:52.607368 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 12:49:52.617050 systemd[1]: Starting chronyd.service - NTP client/server... Jan 15 12:49:52.625101 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 12:49:52.636096 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 12:49:52.643224 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 12:49:52.653092 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 12:49:52.657058 (chronyd)[1655]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 15 12:49:52.662318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 12:49:52.668881 jq[1659]: false Jan 15 12:49:52.669522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 12:49:52.669564 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 15 12:49:52.671144 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 15 12:49:52.683891 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 15 12:49:52.687194 chronyd[1667]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 15 12:49:52.699167 KVP[1663]: KVP starting; pid is:1663 Jan 15 12:49:52.693159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:49:52.704713 chronyd[1667]: Timezone right/UTC failed leap second check, ignoring Jan 15 12:49:52.704911 chronyd[1667]: Loaded seccomp filter (level 2) Jan 15 12:49:52.705432 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 12:49:52.715525 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
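dbus.socket, sshd.socket, and docker.socket are listening before their services run, which is systemd socket activation: each daemon is started lazily on the first connection. The chronyd note about the unset OPTIONS variable is harmless; the unit references $OPTIONS but nothing defines it. A sketch of how to inspect the sockets and, if desired, silence the warning with a drop-in (the drop-in path and empty value are illustrative, not taken from this system):

    # Show socket units and the services they activate
    systemctl list-sockets --all
    # Define OPTIONS explicitly so the reference is no longer unset
    mkdir -p /etc/systemd/system/chronyd.service.d
    printf '[Service]\nEnvironment=OPTIONS=\n' \
      > /etc/systemd/system/chronyd.service.d/10-options.conf
    systemctl daemon-reload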
Jan 15 12:49:52.725797 extend-filesystems[1662]: Found loop4 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found loop5 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found loop6 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found loop7 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda1 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda2 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda3 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found usr Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda4 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda6 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda7 Jan 15 12:49:52.725797 extend-filesystems[1662]: Found sda9 Jan 15 12:49:52.725797 extend-filesystems[1662]: Checking size of /dev/sda9 Jan 15 12:49:52.865873 kernel: hv_utils: KVP IC version 4.0 Jan 15 12:49:52.753395 KVP[1663]: KVP LIC Version: 3.1 Jan 15 12:49:52.736142 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 12:49:52.866107 extend-filesystems[1662]: Old size kept for /dev/sda9 Jan 15 12:49:52.866107 extend-filesystems[1662]: Found sr0 Jan 15 12:49:52.749160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 12:49:52.771559 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 15 12:49:52.797715 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 12:49:52.816663 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 12:49:52.817282 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 12:49:52.825270 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 12:49:52.879080 jq[1694]: true Jan 15 12:49:52.851160 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 12:49:52.870250 systemd[1]: Started chronyd.service - NTP client/server. Jan 15 12:49:52.892536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 12:49:52.895034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 12:49:52.895336 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 12:49:52.895480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 12:49:52.911543 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 12:49:52.911736 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 12:49:52.922539 dbus-daemon[1658]: [system] SELinux support is enabled Jan 15 12:49:52.922817 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 12:49:52.934397 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 12:49:52.957055 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 12:49:52.957991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 12:49:52.978879 systemd-logind[1685]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 15 12:49:52.980655 update_engine[1688]: I20250115 12:49:52.973891 1688 main.cc:92] Flatcar Update Engine starting Jan 15 12:49:52.982036 systemd-logind[1685]: New seat seat0. 
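extend-filesystems enumerates the block devices and grows the root filesystem when its partition has spare space; "Old size kept for /dev/sda9" means the root partition already fills its allocation, so nothing was resized. A manual equivalent, assuming the btrfs root filesystem implied by the BTRFS kernel messages later in this log:

    # Inspect the partition layout the service just scanned
    lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/sda
    # Grow a btrfs root to fill its partition (a no-op if already at max size)
    btrfs filesystem resize max /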
Jan 15 12:49:52.991048 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 12:49:53.000333 update_engine[1688]: I20250115 12:49:53.000269 1688 update_check_scheduler.cc:74] Next update check in 3m51s Jan 15 12:49:53.007358 jq[1711]: true Jan 15 12:49:53.026973 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1702) Jan 15 12:49:53.027023 coreos-metadata[1657]: Jan 15 12:49:53.026 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 15 12:49:53.029957 coreos-metadata[1657]: Jan 15 12:49:53.029 INFO Fetch successful Jan 15 12:49:53.030327 coreos-metadata[1657]: Jan 15 12:49:53.030 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 15 12:49:53.036057 (ntainerd)[1712]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 12:49:53.036567 coreos-metadata[1657]: Jan 15 12:49:53.035 INFO Fetch successful Jan 15 12:49:53.036834 coreos-metadata[1657]: Jan 15 12:49:53.036 INFO Fetching http://168.63.129.16/machine/e8e936a9-5f65-495a-914e-35f926246166/4f290a36%2D4148%2D4494%2Da3a4%2D0c03649983b1.%5Fci%2D4081.3.0%2Da%2Dc63c213d7c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 15 12:49:53.042023 coreos-metadata[1657]: Jan 15 12:49:53.039 INFO Fetch successful Jan 15 12:49:53.042023 coreos-metadata[1657]: Jan 15 12:49:53.040 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 15 12:49:53.051364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 12:49:53.051400 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 12:49:53.052329 dbus-daemon[1658]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 12:49:53.058677 coreos-metadata[1657]: Jan 15 12:49:53.057 INFO Fetch successful Jan 15 12:49:53.067538 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 12:49:53.067566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 12:49:53.117440 tar[1710]: linux-arm64/helm Jan 15 12:49:53.120264 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 12:49:53.134341 systemd[1]: Started update-engine.service - Update Engine. Jan 15 12:49:53.152900 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 12:49:53.165215 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 12:49:53.213622 bash[1772]: Updated "/home/core/.ssh/authorized_keys" Jan 15 12:49:53.217130 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 12:49:53.229791 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 12:49:53.403579 sshd_keygen[1689]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 12:49:53.437134 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 12:49:53.440061 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
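coreos-metadata talks to the two standard Azure endpoints seen in these fetches: the WireServer at 168.63.129.16 for goal state, and the Instance Metadata Service at 169.254.169.254 for VM details. The IMDS request in the log can be reproduced directly; IMDS requires the Metadata header and rejects proxied requests:

    curl -s -H 'Metadata: true' \
      'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'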
Jan 15 12:49:53.457249 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 12:49:53.468193 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 15 12:49:53.480506 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 12:49:53.480679 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 12:49:53.495130 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 12:49:53.538141 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 15 12:49:53.546974 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 12:49:53.563465 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 12:49:53.571244 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 15 12:49:53.581569 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 12:49:53.682741 tar[1710]: linux-arm64/LICENSE Jan 15 12:49:53.683146 tar[1710]: linux-arm64/README.md Jan 15 12:49:53.696281 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 12:49:53.830828 containerd[1712]: time="2025-01-15T12:49:53.830450160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 15 12:49:53.866652 containerd[1712]: time="2025-01-15T12:49:53.866553600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:53.867973 containerd[1712]: time="2025-01-15T12:49:53.867899800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:53.868647 containerd[1712]: time="2025-01-15T12:49:53.868621200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 15 12:49:53.868672 containerd[1712]: time="2025-01-15T12:49:53.868652720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 15 12:49:53.868843 containerd[1712]: time="2025-01-15T12:49:53.868822400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 15 12:49:53.868875 containerd[1712]: time="2025-01-15T12:49:53.868848560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:53.868949 containerd[1712]: time="2025-01-15T12:49:53.868919560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:53.868971 containerd[1712]: time="2025-01-15T12:49:53.868953960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869150 containerd[1712]: time="2025-01-15T12:49:53.869127560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869171 containerd[1712]: time="2025-01-15T12:49:53.869148880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869171 containerd[1712]: time="2025-01-15T12:49:53.869164080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869216 containerd[1712]: time="2025-01-15T12:49:53.869173560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869258 containerd[1712]: time="2025-01-15T12:49:53.869241720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869454 containerd[1712]: time="2025-01-15T12:49:53.869433760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869561 containerd[1712]: time="2025-01-15T12:49:53.869539360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 15 12:49:53.869586 containerd[1712]: time="2025-01-15T12:49:53.869559840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 15 12:49:53.869667 containerd[1712]: time="2025-01-15T12:49:53.869647520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 15 12:49:53.869717 containerd[1712]: time="2025-01-15T12:49:53.869700160Z" level=info msg="metadata content store policy set" policy=shared Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884229240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884300120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884319680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884335560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884349600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884529240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884750040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884837520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884852600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884866840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884880880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884894240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884907080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885068 containerd[1712]: time="2025-01-15T12:49:53.884920600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.884955360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.884971120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.884983560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.884996480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.885016720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.885030600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885424 containerd[1712]: time="2025-01-15T12:49:53.885042960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885636 containerd[1712]: time="2025-01-15T12:49:53.885618040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885695 containerd[1712]: time="2025-01-15T12:49:53.885682520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885759 containerd[1712]: time="2025-01-15T12:49:53.885746200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885811 containerd[1712]: time="2025-01-15T12:49:53.885799440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.885867 containerd[1712]: time="2025-01-15T12:49:53.885854360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.885906680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.885956960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.885977440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.885990000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.886003520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.886026200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.886053280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.886065920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886133 containerd[1712]: time="2025-01-15T12:49:53.886076760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 15 12:49:53.886387 containerd[1712]: time="2025-01-15T12:49:53.886333200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 15 12:49:53.886387 containerd[1712]: time="2025-01-15T12:49:53.886360080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 15 12:49:53.886953 containerd[1712]: time="2025-01-15T12:49:53.886373600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 15 12:49:53.886953 containerd[1712]: time="2025-01-15T12:49:53.886528080Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 15 12:49:53.886953 containerd[1712]: time="2025-01-15T12:49:53.886540000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 15 12:49:53.886953 containerd[1712]: time="2025-01-15T12:49:53.886552760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 15 12:49:53.886953 containerd[1712]: time="2025-01-15T12:49:53.886564280Z" level=info msg="NRI interface is disabled by configuration." Jan 15 12:49:53.886953 containerd[1712]: time="2025-01-15T12:49:53.886581600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 15 12:49:53.887119 containerd[1712]: time="2025-01-15T12:49:53.886863880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 15 12:49:53.887119 containerd[1712]: time="2025-01-15T12:49:53.886921080Z" level=info msg="Connect containerd service" Jan 15 12:49:53.887287 containerd[1712]: time="2025-01-15T12:49:53.887268640Z" level=info msg="using legacy CRI server" Jan 15 12:49:53.887336 containerd[1712]: time="2025-01-15T12:49:53.887323200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 12:49:53.887469 containerd[1712]: time="2025-01-15T12:49:53.887454960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 15 12:49:53.888192 containerd[1712]: time="2025-01-15T12:49:53.888161920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 12:49:53.888772 
containerd[1712]: time="2025-01-15T12:49:53.888392280Z" level=info msg="Start subscribing containerd event" Jan 15 12:49:53.888772 containerd[1712]: time="2025-01-15T12:49:53.888445680Z" level=info msg="Start recovering state" Jan 15 12:49:53.888772 containerd[1712]: time="2025-01-15T12:49:53.888509640Z" level=info msg="Start event monitor" Jan 15 12:49:53.888772 containerd[1712]: time="2025-01-15T12:49:53.888519720Z" level=info msg="Start snapshots syncer" Jan 15 12:49:53.888772 containerd[1712]: time="2025-01-15T12:49:53.888528960Z" level=info msg="Start cni network conf syncer for default" Jan 15 12:49:53.888772 containerd[1712]: time="2025-01-15T12:49:53.888536240Z" level=info msg="Start streaming server" Jan 15 12:49:53.889128 containerd[1712]: time="2025-01-15T12:49:53.889110080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 12:49:53.889235 containerd[1712]: time="2025-01-15T12:49:53.889223360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 12:49:53.889488 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 12:49:53.890227 containerd[1712]: time="2025-01-15T12:49:53.889618240Z" level=info msg="containerd successfully booted in 0.060060s" Jan 15 12:49:53.975465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:49:53.982722 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:49:53.983790 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 12:49:53.994033 systemd[1]: Startup finished in 718ms (kernel) + 14.755s (initrd) + 25.119s (userspace) = 40.593s. Jan 15 12:49:54.295391 login[1806]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:49:54.300506 login[1807]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:49:54.307626 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 12:49:54.318402 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 12:49:54.321575 systemd-logind[1685]: New session 1 of user core. Jan 15 12:49:54.327238 systemd-logind[1685]: New session 2 of user core. Jan 15 12:49:54.335366 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 12:49:54.343373 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 12:49:54.346156 (systemd)[1832]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 12:49:54.457029 kubelet[1820]: E0115 12:49:54.456928 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:49:54.459233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:49:54.459355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:49:54.509843 systemd[1832]: Queued start job for default target default.target. Jan 15 12:49:54.516816 systemd[1832]: Created slice app.slice - User Application Slice. Jan 15 12:49:54.516849 systemd[1832]: Reached target paths.target - Paths. Jan 15 12:49:54.516861 systemd[1832]: Reached target timers.target - Timers. 
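Two details in this stretch are worth calling out. First, the CRI config dump shows Options:map[SystemdCgroup:true] for the runc runtime, meaning containerd delegates cgroup management to systemd, which is what kubelet expects on a systemd host. Second, kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written during node bootstrap (for example by kubeadm), so the failure loop that follows is expected until the node joins a cluster. A quick sketch of both checks, assuming the conventional containerd config path:

    # Confirm the systemd cgroup driver in containerd's runc options
    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # kubelet keeps restarting until bootstrap writes this file
    ls -l /var/lib/kubelet/config.yaml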
Jan 15 12:49:54.518094 systemd[1832]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 12:49:54.528296 systemd[1832]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 12:49:54.528363 systemd[1832]: Reached target sockets.target - Sockets. Jan 15 12:49:54.528376 systemd[1832]: Reached target basic.target - Basic System. Jan 15 12:49:54.528417 systemd[1832]: Reached target default.target - Main User Target. Jan 15 12:49:54.528444 systemd[1832]: Startup finished in 176ms. Jan 15 12:49:54.528698 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 12:49:54.531180 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 12:49:54.531790 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 12:49:55.281621 waagent[1804]: 2025-01-15T12:49:55.281528Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 15 12:49:55.287568 waagent[1804]: 2025-01-15T12:49:55.287495Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 15 12:49:55.292249 waagent[1804]: 2025-01-15T12:49:55.292192Z INFO Daemon Daemon Python: 3.11.9 Jan 15 12:49:55.296877 waagent[1804]: 2025-01-15T12:49:55.296820Z INFO Daemon Daemon Run daemon Jan 15 12:49:55.301061 waagent[1804]: 2025-01-15T12:49:55.301018Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 15 12:49:55.310285 waagent[1804]: 2025-01-15T12:49:55.310210Z INFO Daemon Daemon Using waagent for provisioning Jan 15 12:49:55.315791 waagent[1804]: 2025-01-15T12:49:55.315741Z INFO Daemon Daemon Activate resource disk Jan 15 12:49:55.320578 waagent[1804]: 2025-01-15T12:49:55.320524Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 15 12:49:55.332252 waagent[1804]: 2025-01-15T12:49:55.332192Z INFO Daemon Daemon Found device: None Jan 15 12:49:55.336829 waagent[1804]: 2025-01-15T12:49:55.336780Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 15 12:49:55.345825 waagent[1804]: 2025-01-15T12:49:55.345761Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 15 12:49:55.359184 waagent[1804]: 2025-01-15T12:49:55.359125Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 12:49:55.365175 waagent[1804]: 2025-01-15T12:49:55.365122Z INFO Daemon Daemon Running default provisioning handler Jan 15 12:49:55.377429 waagent[1804]: 2025-01-15T12:49:55.376871Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 15 12:49:55.391423 waagent[1804]: 2025-01-15T12:49:55.391355Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 15 12:49:55.401310 waagent[1804]: 2025-01-15T12:49:55.401246Z INFO Daemon Daemon cloud-init is enabled: False Jan 15 12:49:55.406569 waagent[1804]: 2025-01-15T12:49:55.406512Z INFO Daemon Daemon Copying ovf-env.xml Jan 15 12:49:55.525280 waagent[1804]: 2025-01-15T12:49:55.525039Z INFO Daemon Daemon Successfully mounted dvd Jan 15 12:49:55.540313 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
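The waagent daemon takes over provisioning here because cloud-init is disabled on this image. The resource-disk errors are non-fatal, likely because the VM size in use exposes no local resource disk to mount. To check the agent's view after boot, a minimal sketch (the provisioned marker path is the agent's convention, noted as an assumption):

    waagent --version
    # WALinuxAgent records completed provisioning in a marker file
    ls -l /var/lib/waagent/provisioned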
Jan 15 12:49:55.542801 waagent[1804]: 2025-01-15T12:49:55.542715Z INFO Daemon Daemon Detect protocol endpoint Jan 15 12:49:55.548235 waagent[1804]: 2025-01-15T12:49:55.548167Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 15 12:49:55.554355 waagent[1804]: 2025-01-15T12:49:55.554287Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 15 12:49:55.561322 waagent[1804]: 2025-01-15T12:49:55.561267Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 15 12:49:55.567172 waagent[1804]: 2025-01-15T12:49:55.567119Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 15 12:49:55.572339 waagent[1804]: 2025-01-15T12:49:55.572286Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 15 12:49:55.633426 waagent[1804]: 2025-01-15T12:49:55.633378Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 15 12:49:55.640375 waagent[1804]: 2025-01-15T12:49:55.640345Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 15 12:49:55.646532 waagent[1804]: 2025-01-15T12:49:55.646484Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 15 12:49:55.879798 waagent[1804]: 2025-01-15T12:49:55.879643Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 15 12:49:55.886771 waagent[1804]: 2025-01-15T12:49:55.886704Z INFO Daemon Daemon Forcing an update of the goal state. Jan 15 12:49:55.896551 waagent[1804]: 2025-01-15T12:49:55.896497Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 12:49:55.959654 waagent[1804]: 2025-01-15T12:49:55.959604Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 15 12:49:55.966095 waagent[1804]: 2025-01-15T12:49:55.966038Z INFO Daemon Jan 15 12:49:55.969140 waagent[1804]: 2025-01-15T12:49:55.969096Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 32bb79ac-7ec8-49b1-b11c-397d3e31d19a eTag: 11341813397142433863 source: Fabric] Jan 15 12:49:55.981476 waagent[1804]: 2025-01-15T12:49:55.981427Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 15 12:49:55.989128 waagent[1804]: 2025-01-15T12:49:55.989078Z INFO Daemon Jan 15 12:49:55.992357 waagent[1804]: 2025-01-15T12:49:55.992312Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 15 12:49:56.004404 waagent[1804]: 2025-01-15T12:49:56.004364Z INFO Daemon Daemon Downloading artifacts profile blob Jan 15 12:49:56.086273 waagent[1804]: 2025-01-15T12:49:56.086177Z INFO Daemon Downloaded certificate {'thumbprint': 'C23ED76B0590F40983CF688D020C2A792A809F18', 'hasPrivateKey': False} Jan 15 12:49:56.097771 waagent[1804]: 2025-01-15T12:49:56.097713Z INFO Daemon Downloaded certificate {'thumbprint': '3AA81DF96C05DF8572BFD20E901C840007F51096', 'hasPrivateKey': True} Jan 15 12:49:56.109149 waagent[1804]: 2025-01-15T12:49:56.109093Z INFO Daemon Fetch goal state completed Jan 15 12:49:56.121485 waagent[1804]: 2025-01-15T12:49:56.121438Z INFO Daemon Daemon Starting provisioning Jan 15 12:49:56.126945 waagent[1804]: 2025-01-15T12:49:56.126878Z INFO Daemon Daemon Handle ovf-env.xml. 
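Protocol detection consists of probing the WireServer version endpoint and then fetching the goal state (incarnation, certificates, artifacts profile), which is exactly the request sequence logged above. The probes can be replayed by hand; the x-ms-version header matches the wire protocol version the daemon negotiated:

    curl -s 'http://168.63.129.16/?comp=versions'
    curl -s -H 'x-ms-version: 2012-11-30' \
      'http://168.63.129.16/machine/?comp=goalstate'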
Jan 15 12:49:56.132014 waagent[1804]: 2025-01-15T12:49:56.131921Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-c63c213d7c] Jan 15 12:49:56.155315 waagent[1804]: 2025-01-15T12:49:56.155240Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-c63c213d7c] Jan 15 12:49:56.162496 waagent[1804]: 2025-01-15T12:49:56.162429Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 15 12:49:56.169177 waagent[1804]: 2025-01-15T12:49:56.169122Z INFO Daemon Daemon Primary interface is [eth0] Jan 15 12:49:56.205915 systemd-networkd[1338]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 12:49:56.205924 systemd-networkd[1338]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 12:49:56.205979 systemd-networkd[1338]: eth0: DHCP lease lost Jan 15 12:49:56.208054 waagent[1804]: 2025-01-15T12:49:56.207093Z INFO Daemon Daemon Create user account if not exists Jan 15 12:49:56.214884 waagent[1804]: 2025-01-15T12:49:56.214813Z INFO Daemon Daemon User core already exists, skip useradd Jan 15 12:49:56.215029 systemd-networkd[1338]: eth0: DHCPv6 lease lost Jan 15 12:49:56.221276 waagent[1804]: 2025-01-15T12:49:56.221193Z INFO Daemon Daemon Configure sudoer Jan 15 12:49:56.226647 waagent[1804]: 2025-01-15T12:49:56.226581Z INFO Daemon Daemon Configure sshd Jan 15 12:49:56.231348 waagent[1804]: 2025-01-15T12:49:56.231277Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 15 12:49:56.244553 waagent[1804]: 2025-01-15T12:49:56.244489Z INFO Daemon Daemon Deploy ssh public key. Jan 15 12:49:56.259010 systemd-networkd[1338]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 15 12:49:57.341081 waagent[1804]: 2025-01-15T12:49:57.341019Z INFO Daemon Daemon Provisioning complete Jan 15 12:49:57.362096 waagent[1804]: 2025-01-15T12:49:57.362048Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 15 12:49:57.368855 waagent[1804]: 2025-01-15T12:49:57.368791Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
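"Examine /proc/net/route for primary interface" refers to finding the interface that owns the default route: destination fields in that file are little-endian hex, and the default route has destination 00000000. A one-liner that performs the same lookup:

    # Print the interface owning the default (00000000) route
    awk '$2 == "00000000" { print $1 }' /proc/net/route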
Jan 15 12:49:57.379242 waagent[1804]: 2025-01-15T12:49:57.379183Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 15 12:49:57.511617 waagent[1890]: 2025-01-15T12:49:57.511544Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 15 12:49:57.512574 waagent[1890]: 2025-01-15T12:49:57.512060Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 15 12:49:57.512574 waagent[1890]: 2025-01-15T12:49:57.512134Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 15 12:49:57.554959 waagent[1890]: 2025-01-15T12:49:57.552782Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 15 12:49:57.554959 waagent[1890]: 2025-01-15T12:49:57.553044Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:49:57.554959 waagent[1890]: 2025-01-15T12:49:57.553112Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:49:57.561713 waagent[1890]: 2025-01-15T12:49:57.561643Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 15 12:49:57.567892 waagent[1890]: 2025-01-15T12:49:57.567844Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 15 12:49:57.568564 waagent[1890]: 2025-01-15T12:49:57.568524Z INFO ExtHandler Jan 15 12:49:57.568709 waagent[1890]: 2025-01-15T12:49:57.568678Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ca642fd9-a820-4fae-9f49-e73e4c8fe217 eTag: 11341813397142433863 source: Fabric] Jan 15 12:49:57.569123 waagent[1890]: 2025-01-15T12:49:57.569082Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 15 12:49:57.569775 waagent[1890]: 2025-01-15T12:49:57.569733Z INFO ExtHandler Jan 15 12:49:57.569910 waagent[1890]: 2025-01-15T12:49:57.569879Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 15 12:49:57.574025 waagent[1890]: 2025-01-15T12:49:57.573988Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 15 12:49:57.662621 waagent[1890]: 2025-01-15T12:49:57.662498Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C23ED76B0590F40983CF688D020C2A792A809F18', 'hasPrivateKey': False} Jan 15 12:49:57.663221 waagent[1890]: 2025-01-15T12:49:57.663180Z INFO ExtHandler Downloaded certificate {'thumbprint': '3AA81DF96C05DF8572BFD20E901C840007F51096', 'hasPrivateKey': True} Jan 15 12:49:57.663780 waagent[1890]: 2025-01-15T12:49:57.663722Z INFO ExtHandler Fetch goal state completed Jan 15 12:49:57.680980 waagent[1890]: 2025-01-15T12:49:57.680900Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1890 Jan 15 12:49:57.681264 waagent[1890]: 2025-01-15T12:49:57.681228Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 15 12:49:57.683019 waagent[1890]: 2025-01-15T12:49:57.682977Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 15 12:49:57.683481 waagent[1890]: 2025-01-15T12:49:57.683445Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 15 12:49:57.718103 waagent[1890]: 2025-01-15T12:49:57.718064Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 15 12:49:57.718439 waagent[1890]: 2025-01-15T12:49:57.718402Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 15 12:49:57.724562 waagent[1890]: 2025-01-15T12:49:57.724529Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 15 12:49:57.731112 systemd[1]: Reloading requested from client PID 1905 ('systemctl') (unit waagent.service)... Jan 15 12:49:57.731130 systemd[1]: Reloading... Jan 15 12:49:57.809524 zram_generator::config[1937]: No configuration found. Jan 15 12:49:57.916431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:57.991638 systemd[1]: Reloading finished in 260 ms. Jan 15 12:49:58.017494 waagent[1890]: 2025-01-15T12:49:58.016305Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 15 12:49:58.022680 systemd[1]: Reloading requested from client PID 1993 ('systemctl') (unit waagent.service)... Jan 15 12:49:58.022825 systemd[1]: Reloading... Jan 15 12:49:58.082963 zram_generator::config[2023]: No configuration found. Jan 15 12:49:58.207127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:49:58.282690 systemd[1]: Reloading finished in 259 ms. Jan 15 12:49:58.305905 waagent[1890]: 2025-01-15T12:49:58.305102Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 15 12:49:58.305905 waagent[1890]: 2025-01-15T12:49:58.305273Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 15 12:49:58.843982 waagent[1890]: 2025-01-15T12:49:58.843677Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 15 12:49:58.844387 waagent[1890]: 2025-01-15T12:49:58.844330Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 15 12:49:58.845196 waagent[1890]: 2025-01-15T12:49:58.845136Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 15 12:49:58.845686 waagent[1890]: 2025-01-15T12:49:58.845562Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
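The two systemctl daemon-reload cycles bracket waagent generating and enabling its waagent-network-setup.service unit. The general shape of that arrangement, sketched purely as an illustration (the unit contents below are abridged and assumed, not copied from the agent):

    cat <<'EOF' > /etc/systemd/system/waagent-network-setup.service
    [Unit]
    Description=waagent network setup
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/python3 /var/lib/waagent/waagent-network-setup.py
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable waagent-network-setup.service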
Jan 15 12:49:58.846772 waagent[1890]: 2025-01-15T12:49:58.845902Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:49:58.846772 waagent[1890]: 2025-01-15T12:49:58.846034Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:49:58.846772 waagent[1890]: 2025-01-15T12:49:58.846170Z INFO EnvHandler ExtHandler Configure routes Jan 15 12:49:58.846772 waagent[1890]: 2025-01-15T12:49:58.846231Z INFO EnvHandler ExtHandler Gateway:None Jan 15 12:49:58.846772 waagent[1890]: 2025-01-15T12:49:58.846273Z INFO EnvHandler ExtHandler Routes:None Jan 15 12:49:58.847086 waagent[1890]: 2025-01-15T12:49:58.847035Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 15 12:49:58.847322 waagent[1890]: 2025-01-15T12:49:58.847285Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 15 12:49:58.847459 waagent[1890]: 2025-01-15T12:49:58.847426Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 15 12:49:58.847754 waagent[1890]: 2025-01-15T12:49:58.847714Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 15 12:49:58.848066 waagent[1890]: 2025-01-15T12:49:58.848020Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 15 12:49:58.848066 waagent[1890]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 15 12:49:58.848066 waagent[1890]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 15 12:49:58.848066 waagent[1890]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 15 12:49:58.848066 waagent[1890]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:49:58.848066 waagent[1890]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:49:58.848066 waagent[1890]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 15 12:49:58.848714 waagent[1890]: 2025-01-15T12:49:58.848641Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 15 12:49:58.849033 waagent[1890]: 2025-01-15T12:49:58.848991Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 15 12:49:58.849451 waagent[1890]: 2025-01-15T12:49:58.848921Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 15 12:49:58.849918 waagent[1890]: 2025-01-15T12:49:58.849523Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 15 12:49:58.860531 waagent[1890]: 2025-01-15T12:49:58.860471Z INFO ExtHandler ExtHandler Jan 15 12:49:58.860650 waagent[1890]: 2025-01-15T12:49:58.860597Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: db86f7ca-cce0-4c2c-a182-0f7de06ceb56 correlation bc664f64-f6bc-47a2-a0d1-09aa4caf1dad created: 2025-01-15T12:48:24.623695Z] Jan 15 12:49:58.861051 waagent[1890]: 2025-01-15T12:49:58.861003Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
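The routing table dump uses the raw /proc/net/route encoding: each address field is a little-endian hexadecimal IPv4 value. For example the gateway 0114C80A is the byte sequence 0A.C8.14.01, i.e. 10.200.20.1, matching the DHCP gateway acquired earlier in this log. The conversion, done by hand:

    # 0114C80A -> least significant byte first: 0A C8 14 01
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x01   # prints 10.200.20.1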
Jan 15 12:49:58.861646 waagent[1890]: 2025-01-15T12:49:58.861604Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 15 12:49:58.945435 waagent[1890]: 2025-01-15T12:49:58.944904Z INFO MonitorHandler ExtHandler Network interfaces: Jan 15 12:49:58.945435 waagent[1890]: Executing ['ip', '-a', '-o', 'link']: Jan 15 12:49:58.945435 waagent[1890]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 15 12:49:58.945435 waagent[1890]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fe:94:86 brd ff:ff:ff:ff:ff:ff Jan 15 12:49:58.945435 waagent[1890]: 3: enP28579s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fe:94:86 brd ff:ff:ff:ff:ff:ff\ altname enP28579p0s2 Jan 15 12:49:58.945435 waagent[1890]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 15 12:49:58.945435 waagent[1890]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 15 12:49:58.945435 waagent[1890]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 15 12:49:58.945435 waagent[1890]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 15 12:49:58.945435 waagent[1890]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 15 12:49:58.945435 waagent[1890]: 2: eth0 inet6 fe80::20d:3aff:fefe:9486/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 12:49:58.945435 waagent[1890]: 3: enP28579s1 inet6 fe80::20d:3aff:fefe:9486/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 15 12:49:59.333972 waagent[1890]: 2025-01-15T12:49:59.333698Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 15 12:49:59.333972 waagent[1890]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:59.333972 waagent[1890]: pkts bytes target prot opt in out source destination Jan 15 12:49:59.333972 waagent[1890]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:59.333972 waagent[1890]: pkts bytes target prot opt in out source destination Jan 15 12:49:59.333972 waagent[1890]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:59.333972 waagent[1890]: pkts bytes target prot opt in out source destination Jan 15 12:49:59.333972 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 12:49:59.333972 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 12:49:59.333972 waagent[1890]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 12:49:59.336842 waagent[1890]: 2025-01-15T12:49:59.336766Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 15 12:49:59.336842 waagent[1890]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:59.336842 waagent[1890]: pkts bytes target prot opt in out source destination Jan 15 12:49:59.336842 waagent[1890]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:59.336842 waagent[1890]: pkts bytes target prot opt in out source destination Jan 15 12:49:59.336842 waagent[1890]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 15 12:49:59.336842 waagent[1890]: pkts bytes target prot opt in out source destination Jan 15 12:49:59.336842 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 15 12:49:59.336842 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 15 12:49:59.336842 waagent[1890]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 15 12:49:59.337161 waagent[1890]: 2025-01-15T12:49:59.337091Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 15 12:49:59.690144 waagent[1890]: 2025-01-15T12:49:59.690012Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C34C87FB-F67B-4599-BBC0-A8BC90D70F78;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 15 12:50:04.520581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 12:50:04.533209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:04.768430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:04.772284 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:04.817509 kubelet[2120]: E0115 12:50:04.817416 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:04.820580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:04.820735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:15.020710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 12:50:15.029103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
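The firewall listing shows the three OUTPUT rules WALinuxAgent installs for the WireServer address: allow DNS on tcp/53, allow root-owned (UID 0) traffic so the agent itself can reach the endpoint, and drop new connections from everything else. Expressed as iptables commands (the table placement is illustrative; only the match logic is taken from the listing above):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP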
Jan 15 12:50:15.265100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:15.268532 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:15.304008 kubelet[2136]: E0115 12:50:15.303884 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:15.305783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:15.305947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:16.501845 chronyd[1667]: Selected source PHC0 Jan 15 12:50:25.520633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 15 12:50:25.529128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:25.784206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:25.796204 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:25.827034 kubelet[2151]: E0115 12:50:25.826981 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:25.829352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:25.829586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:50:29.131762 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 15 12:50:32.354953 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 12:50:32.363152 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:49656.service - OpenSSH per-connection server daemon (10.200.16.10:49656). Jan 15 12:50:32.868431 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 49656 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:32.869733 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:32.873405 systemd-logind[1685]: New session 3 of user core. Jan 15 12:50:32.885045 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 12:50:33.268907 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:49672.service - OpenSSH per-connection server daemon (10.200.16.10:49672). Jan 15 12:50:33.696368 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 49672 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:33.697742 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:33.702598 systemd-logind[1685]: New session 4 of user core. Jan 15 12:50:33.705156 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 12:50:34.018170 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:34.022328 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:49672.service: Deactivated successfully. Jan 15 12:50:34.023899 systemd[1]: session-4.scope: Deactivated successfully. 
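chronyd selecting PHC0 means it is steering the clock from the PTP hardware clock the hypervisor exposes, rather than from network NTP sources. To verify on a similar Azure VM:

    cat /sys/class/ptp/ptp0/clock_name   # expected to read "hyperv" on Azure
    chronyc sources                      # PHC0 should be the selected (*) source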
Jan 15 12:50:34.025646 systemd-logind[1685]: Session 4 logged out. Waiting for processes to exit. Jan 15 12:50:34.026905 systemd-logind[1685]: Removed session 4. Jan 15 12:50:34.099836 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:49678.service - OpenSSH per-connection server daemon (10.200.16.10:49678). Jan 15 12:50:34.560243 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 49678 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:34.562157 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:34.565847 systemd-logind[1685]: New session 5 of user core. Jan 15 12:50:34.574110 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 12:50:34.905173 sshd[2171]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:34.908671 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:49678.service: Deactivated successfully. Jan 15 12:50:34.910156 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 12:50:34.910732 systemd-logind[1685]: Session 5 logged out. Waiting for processes to exit. Jan 15 12:50:34.911615 systemd-logind[1685]: Removed session 5. Jan 15 12:50:34.989120 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:49682.service - OpenSSH per-connection server daemon (10.200.16.10:49682). Jan 15 12:50:35.455766 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 49682 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:35.457149 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:35.461013 systemd-logind[1685]: New session 6 of user core. Jan 15 12:50:35.469085 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 12:50:35.808171 sshd[2178]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:35.811754 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:49682.service: Deactivated successfully. Jan 15 12:50:35.814366 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 12:50:35.815556 systemd-logind[1685]: Session 6 logged out. Waiting for processes to exit. Jan 15 12:50:35.816674 systemd-logind[1685]: Removed session 6. Jan 15 12:50:35.887643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 15 12:50:35.890443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:35.895047 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:45766.service - OpenSSH per-connection server daemon (10.200.16.10:45766). Jan 15 12:50:35.992780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:35.997280 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:36.030398 kubelet[2195]: E0115 12:50:36.030327 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:36.032794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:36.033076 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
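The kubelet respawns land roughly ten seconds apart (12:50:04, 12:50:15, 12:50:25, 12:50:36), consistent with a unit configured to restart on failure with about a ten-second delay; the exact values can be read from the unit itself:

    systemctl show kubelet -p Restart -p RestartUSec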
Jan 15 12:50:36.320868 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 45766 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:36.322276 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:36.326641 systemd-logind[1685]: New session 7 of user core. Jan 15 12:50:36.335143 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 12:50:36.686694 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 12:50:36.687010 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:36.706234 sudo[2202]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:36.782184 sshd[2186]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:36.786055 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:45766.service: Deactivated successfully. Jan 15 12:50:36.789386 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 12:50:36.790051 systemd-logind[1685]: Session 7 logged out. Waiting for processes to exit. Jan 15 12:50:36.791310 systemd-logind[1685]: Removed session 7. Jan 15 12:50:36.859509 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:45782.service - OpenSSH per-connection server daemon (10.200.16.10:45782). Jan 15 12:50:37.287474 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 45782 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:37.288873 sshd[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:37.293675 systemd-logind[1685]: New session 8 of user core. Jan 15 12:50:37.299129 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 12:50:37.533726 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 12:50:37.534354 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:37.537695 sudo[2211]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:37.542648 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 15 12:50:37.542929 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:37.563201 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 15 12:50:37.564514 auditctl[2214]: No rules Jan 15 12:50:37.564820 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 12:50:37.565030 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 15 12:50:37.567268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 15 12:50:37.591626 augenrules[2232]: No rules Jan 15 12:50:37.592823 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 15 12:50:37.594222 sudo[2210]: pam_unix(sudo:session): session closed for user root Jan 15 12:50:37.671163 sshd[2207]: pam_unix(sshd:session): session closed for user core Jan 15 12:50:37.673683 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:45782.service: Deactivated successfully. Jan 15 12:50:37.675372 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 12:50:37.676680 systemd-logind[1685]: Session 8 logged out. Waiting for processes to exit. Jan 15 12:50:37.677591 systemd-logind[1685]: Removed session 8. 
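
The audit-rules restart in session 8 ends with an empty rule set on both the stop path (auditctl) and the start path (augenrules), consistent with the two rules.d files having just been removed. The same reload done by hand, assuming the auditctl/augenrules tooling seen in the log:

    # Flush the loaded kernel audit rules, rebuild from /etc/audit/rules.d,
    # then list the result ("No rules" when rules.d is empty)
    auditctl -D
    augenrules --load
    auditctl -l
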
Jan 15 12:50:37.748917 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:45784.service - OpenSSH per-connection server daemon (10.200.16.10:45784). Jan 15 12:50:38.181142 sshd[2240]: Accepted publickey for core from 10.200.16.10 port 45784 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:50:38.182464 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:50:38.186598 systemd-logind[1685]: New session 9 of user core. Jan 15 12:50:38.198082 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 12:50:38.402556 update_engine[1688]: I20250115 12:50:38.401862 1688 update_attempter.cc:509] Updating boot flags... Jan 15 12:50:38.430495 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 12:50:38.430807 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 12:50:38.475604 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2260) Jan 15 12:50:38.574977 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2261) Jan 15 12:50:39.357313 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 12:50:39.357355 (dockerd)[2324]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 12:50:39.903526 dockerd[2324]: time="2025-01-15T12:50:39.903468857Z" level=info msg="Starting up" Jan 15 12:50:40.328782 dockerd[2324]: time="2025-01-15T12:50:40.328722483Z" level=info msg="Loading containers: start." Jan 15 12:50:40.464963 kernel: Initializing XFRM netlink socket Jan 15 12:50:40.582406 systemd-networkd[1338]: docker0: Link UP Jan 15 12:50:40.601638 dockerd[2324]: time="2025-01-15T12:50:40.601365160Z" level=info msg="Loading containers: done." Jan 15 12:50:40.611819 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1904680623-merged.mount: Deactivated successfully. Jan 15 12:50:40.624465 dockerd[2324]: time="2025-01-15T12:50:40.624406291Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 12:50:40.624605 dockerd[2324]: time="2025-01-15T12:50:40.624534250Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 15 12:50:40.624720 dockerd[2324]: time="2025-01-15T12:50:40.624694450Z" level=info msg="Daemon has completed initialization" Jan 15 12:50:40.678527 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 12:50:40.679472 dockerd[2324]: time="2025-01-15T12:50:40.679092141Z" level=info msg="API listen on /run/docker.sock" Jan 15 12:50:41.910242 containerd[1712]: time="2025-01-15T12:50:41.910194695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 15 12:50:42.671431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071214211.mount: Deactivated successfully. 
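
Once dockerd logs "API listen on /run/docker.sock" the daemon is serving, with overlay2 as the storage driver and the docker0 bridge that systemd-networkd just reported as up. A quick verification sketch using the names from the log:

    # Daemon version and storage driver, as logged above
    docker info --format '{{.ServerVersion}} {{.Driver}}'

    # The default bridge behind the "docker0: Link UP" message
    ip addr show docker0
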
Jan 15 12:50:44.362973 containerd[1712]: time="2025-01-15T12:50:44.362899655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:44.368323 containerd[1712]: time="2025-01-15T12:50:44.368262968Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585" Jan 15 12:50:44.371434 containerd[1712]: time="2025-01-15T12:50:44.371394124Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:44.375588 containerd[1712]: time="2025-01-15T12:50:44.375529599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:44.376780 containerd[1712]: time="2025-01-15T12:50:44.376586318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.466341303s" Jan 15 12:50:44.376780 containerd[1712]: time="2025-01-15T12:50:44.376633038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 15 12:50:44.377539 containerd[1712]: time="2025-01-15T12:50:44.377338637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 15 12:50:46.270582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 15 12:50:46.280279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:46.384470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:46.387688 (kubelet)[2522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:46.426820 kubelet[2522]: E0115 12:50:46.426714 2522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:46.429160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:46.429316 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
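
The PullImage/ImageCreate pairs here are the control-plane images being fetched through containerd's CRI ahead of the kubelet coming up. The same pre-pull can be driven by hand, with the version string taken from the log (commands assume kubeadm and crictl are installed):

    # Pre-pull the full control-plane image set for this release
    kubeadm config images pull --kubernetes-version v1.31.4

    # Or fetch a single image straight through the CRI
    crictl pull registry.k8s.io/kube-apiserver:v1.31.4
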
Jan 15 12:50:46.729527 containerd[1712]: time="2025-01-15T12:50:46.729398125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:46.732411 containerd[1712]: time="2025-01-15T12:50:46.732371121Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096" Jan 15 12:50:46.735685 containerd[1712]: time="2025-01-15T12:50:46.735634277Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:46.742383 containerd[1712]: time="2025-01-15T12:50:46.742306748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:46.743660 containerd[1712]: time="2025-01-15T12:50:46.743538867Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.36616475s" Jan 15 12:50:46.743660 containerd[1712]: time="2025-01-15T12:50:46.743575867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 15 12:50:46.744236 containerd[1712]: time="2025-01-15T12:50:46.744194546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 15 12:50:48.618034 containerd[1712]: time="2025-01-15T12:50:48.617977876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:48.620421 containerd[1712]: time="2025-01-15T12:50:48.620175033Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202" Jan 15 12:50:48.623660 containerd[1712]: time="2025-01-15T12:50:48.623607589Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:48.629800 containerd[1712]: time="2025-01-15T12:50:48.629730980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:48.630945 containerd[1712]: time="2025-01-15T12:50:48.630801539Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.886571233s" Jan 15 12:50:48.630945 containerd[1712]: time="2025-01-15T12:50:48.630838659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 15 12:50:48.631586 
containerd[1712]: time="2025-01-15T12:50:48.631426738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 15 12:50:49.731020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009600669.mount: Deactivated successfully. Jan 15 12:50:50.781045 containerd[1712]: time="2025-01-15T12:50:50.780990337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:50.786481 containerd[1712]: time="2025-01-15T12:50:50.786442570Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Jan 15 12:50:50.790638 containerd[1712]: time="2025-01-15T12:50:50.790588044Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:50.795252 containerd[1712]: time="2025-01-15T12:50:50.795207478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:50.795878 containerd[1712]: time="2025-01-15T12:50:50.795742757Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 2.164285339s" Jan 15 12:50:50.795878 containerd[1712]: time="2025-01-15T12:50:50.795780117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 15 12:50:50.796553 containerd[1712]: time="2025-01-15T12:50:50.796329957Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 15 12:50:51.477590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128292349.mount: Deactivated successfully. 
Jan 15 12:50:52.681990 containerd[1712]: time="2025-01-15T12:50:52.681918594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:52.685338 containerd[1712]: time="2025-01-15T12:50:52.685117110Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 15 12:50:52.690346 containerd[1712]: time="2025-01-15T12:50:52.690303863Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:52.696328 containerd[1712]: time="2025-01-15T12:50:52.696278695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:52.697467 containerd[1712]: time="2025-01-15T12:50:52.697326174Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.900966498s" Jan 15 12:50:52.697467 containerd[1712]: time="2025-01-15T12:50:52.697362773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 15 12:50:52.698142 containerd[1712]: time="2025-01-15T12:50:52.697973373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 12:50:53.299977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008145884.mount: Deactivated successfully. 
Jan 15 12:50:53.321980 containerd[1712]: time="2025-01-15T12:50:53.321450845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:53.326885 containerd[1712]: time="2025-01-15T12:50:53.326844158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 15 12:50:53.330141 containerd[1712]: time="2025-01-15T12:50:53.330087954Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:53.336819 containerd[1712]: time="2025-01-15T12:50:53.336750585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:53.337536 containerd[1712]: time="2025-01-15T12:50:53.337408664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 639.404971ms" Jan 15 12:50:53.337536 containerd[1712]: time="2025-01-15T12:50:53.337442184Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 15 12:50:53.338224 containerd[1712]: time="2025-01-15T12:50:53.338011063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 15 12:50:54.012810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738167497.mount: Deactivated successfully. Jan 15 12:50:55.882811 containerd[1712]: time="2025-01-15T12:50:55.882742645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:55.885181 containerd[1712]: time="2025-01-15T12:50:55.885143882Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 15 12:50:55.887949 containerd[1712]: time="2025-01-15T12:50:55.887887638Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:55.893082 containerd[1712]: time="2025-01-15T12:50:55.893011351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:50:55.894445 containerd[1712]: time="2025-01-15T12:50:55.894322910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.556280167s" Jan 15 12:50:55.894445 containerd[1712]: time="2025-01-15T12:50:55.894358110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 15 12:50:56.520519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
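
With etcd:3.5.15-0 cached, every image pulled so far (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) is local to containerd. A one-line inventory sketch:

    # List the control-plane images containerd now has cached
    crictl images | grep -E 'kube-|etcd|coredns|pause'
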
Jan 15 12:50:56.531198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:50:56.628450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:50:56.632549 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 12:50:56.685068 kubelet[2659]: E0115 12:50:56.685009 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 12:50:56.686885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 12:50:56.687043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 12:51:01.594279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:01.600152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:01.622877 systemd[1]: Reloading requested from client PID 2685 ('systemctl') (unit session-9.scope)... Jan 15 12:51:01.622896 systemd[1]: Reloading... Jan 15 12:51:01.708012 zram_generator::config[2734]: No configuration found. Jan 15 12:51:01.805855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:51:01.880437 systemd[1]: Reloading finished in 257 ms. Jan 15 12:51:01.967540 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 12:51:01.967629 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 12:51:01.969001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:01.975422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:02.529249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:02.533809 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:51:02.589809 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:51:02.589809 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:51:02.589809 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
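
This start differs from the earlier crash loop in two ways: only KUBELET_EXTRA_ARGS is still unset, meaning kubeadm has now written its environment file, and the process lives long enough to emit deprecation warnings steering flags into the config file. A sketch of the two files involved; the paths are the kubeadm defaults rather than anything printed here, and the YAML reconstructs the eviction thresholds and cgroup driver from the NodeConfig dump logged a few lines below:

    # Environment file kubeadm writes for the kubelet unit
    cat /var/lib/kubelet/kubeadm-flags.env

    # Sketch of the KubeletConfiguration matching the logged NodeConfig
    cat > /tmp/kubelet-config-sketch.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF
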
Jan 15 12:51:02.590205 kubelet[2789]: I0115 12:51:02.589887 2789 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:51:03.325320 kubelet[2789]: I0115 12:51:03.325280 2789 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 15 12:51:03.325484 kubelet[2789]: I0115 12:51:03.325474 2789 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:51:03.325775 kubelet[2789]: I0115 12:51:03.325762 2789 server.go:929] "Client rotation is on, will bootstrap in background" Jan 15 12:51:03.345832 kubelet[2789]: E0115 12:51:03.345775 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:03.346920 kubelet[2789]: I0115 12:51:03.346894 2789 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:51:03.356043 kubelet[2789]: E0115 12:51:03.356011 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 15 12:51:03.356198 kubelet[2789]: I0115 12:51:03.356184 2789 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 15 12:51:03.360095 kubelet[2789]: I0115 12:51:03.360077 2789 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 12:51:03.360837 kubelet[2789]: I0115 12:51:03.360821 2789 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 15 12:51:03.361077 kubelet[2789]: I0115 12:51:03.361052 2789 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:51:03.361301 kubelet[2789]: I0115 12:51:03.361145 2789 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-c63c213d7c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 12:51:03.361431 kubelet[2789]: I0115 12:51:03.361419 2789 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:51:03.361479 kubelet[2789]: I0115 12:51:03.361471 2789 container_manager_linux.go:300] "Creating device plugin manager" Jan 15 12:51:03.361630 kubelet[2789]: I0115 12:51:03.361619 2789 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:51:03.364292 kubelet[2789]: I0115 12:51:03.364271 2789 kubelet.go:408] "Attempting to sync node with API server" Jan 15 12:51:03.364385 kubelet[2789]: I0115 12:51:03.364375 2789 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 12:51:03.364460 kubelet[2789]: I0115 12:51:03.364450 2789 kubelet.go:314] "Adding apiserver pod source" Jan 15 12:51:03.364512 kubelet[2789]: I0115 12:51:03.364503 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:51:03.366972 kubelet[2789]: W0115 12:51:03.366819 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c63c213d7c&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:03.366972 kubelet[2789]: E0115 12:51:03.366875 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c63c213d7c&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:03.367251 kubelet[2789]: W0115 12:51:03.367205 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:03.367316 kubelet[2789]: E0115 12:51:03.367251 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:03.367316 kubelet[2789]: I0115 12:51:03.367320 2789 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:51:03.368914 kubelet[2789]: I0115 12:51:03.368868 2789 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:51:03.369358 kubelet[2789]: W0115 12:51:03.369334 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 12:51:03.370787 kubelet[2789]: I0115 12:51:03.370651 2789 server.go:1269] "Started kubelet" Jan 15 12:51:03.371954 kubelet[2789]: I0115 12:51:03.371874 2789 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:51:03.372931 kubelet[2789]: I0115 12:51:03.372694 2789 server.go:460] "Adding debug handlers to kubelet server" Jan 15 12:51:03.373610 kubelet[2789]: I0115 12:51:03.373561 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:51:03.373915 kubelet[2789]: I0115 12:51:03.373898 2789 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:51:03.375041 kubelet[2789]: I0115 12:51:03.375004 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:51:03.375825 kubelet[2789]: E0115 12:51:03.374684 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-c63c213d7c.181adeb23a6ec5ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-c63c213d7c,UID:ci-4081.3.0-a-c63c213d7c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-c63c213d7c,},FirstTimestamp:2025-01-15 12:51:03.370630574 +0000 UTC m=+0.833568555,LastTimestamp:2025-01-15 12:51:03.370630574 +0000 UTC m=+0.833568555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-c63c213d7c,}" Jan 15 12:51:03.377530 kubelet[2789]: I0115 12:51:03.376660 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 12:51:03.379507 kubelet[2789]: I0115 12:51:03.379483 2789 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 15 12:51:03.379691 kubelet[2789]: E0115 12:51:03.379661 2789 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-c63c213d7c\" not found" Jan 15 12:51:03.380300 kubelet[2789]: E0115 12:51:03.380278 2789 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 12:51:03.380568 kubelet[2789]: E0115 12:51:03.380528 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c63c213d7c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms" Jan 15 12:51:03.382508 kubelet[2789]: I0115 12:51:03.382482 2789 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 15 12:51:03.382649 kubelet[2789]: I0115 12:51:03.382629 2789 reconciler.go:26] "Reconciler: start to sync state" Jan 15 12:51:03.383062 kubelet[2789]: W0115 12:51:03.383018 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:03.383141 kubelet[2789]: E0115 12:51:03.383070 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:03.383240 kubelet[2789]: I0115 12:51:03.383194 2789 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:51:03.383240 kubelet[2789]: I0115 12:51:03.383211 2789 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:51:03.383293 kubelet[2789]: I0115 12:51:03.383275 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:51:03.430268 kubelet[2789]: I0115 12:51:03.429869 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 12:51:03.431357 kubelet[2789]: I0115 12:51:03.431337 2789 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 12:51:03.431449 kubelet[2789]: I0115 12:51:03.431440 2789 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:51:03.431531 kubelet[2789]: I0115 12:51:03.431523 2789 kubelet.go:2321] "Starting kubelet main sync loop" Jan 15 12:51:03.431637 kubelet[2789]: E0115 12:51:03.431620 2789 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:51:03.436154 kubelet[2789]: W0115 12:51:03.436104 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:03.436254 kubelet[2789]: E0115 12:51:03.436165 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:03.480455 kubelet[2789]: E0115 12:51:03.480422 2789 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-c63c213d7c\" not found" Jan 15 12:51:03.512308 kubelet[2789]: I0115 12:51:03.512284 2789 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:51:03.512308 kubelet[2789]: I0115 12:51:03.512299 2789 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:51:03.512308 kubelet[2789]: I0115 12:51:03.512318 2789 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:51:03.527947 kubelet[2789]: I0115 12:51:03.527900 2789 policy_none.go:49] "None policy: Start" Jan 15 12:51:03.528655 kubelet[2789]: I0115 12:51:03.528631 2789 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:51:03.528655 kubelet[2789]: I0115 12:51:03.528664 2789 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:51:03.532188 kubelet[2789]: E0115 12:51:03.532161 2789 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 12:51:03.539014 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 12:51:03.548317 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 12:51:03.551327 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 15 12:51:03.561725 kubelet[2789]: I0115 12:51:03.561692 2789 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:51:03.561910 kubelet[2789]: I0115 12:51:03.561892 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 12:51:03.561975 kubelet[2789]: I0115 12:51:03.561909 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 12:51:03.563364 kubelet[2789]: I0115 12:51:03.562928 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:51:03.564844 kubelet[2789]: E0115 12:51:03.564818 2789 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-c63c213d7c\" not found" Jan 15 12:51:03.581866 kubelet[2789]: E0115 12:51:03.581755 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c63c213d7c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms" Jan 15 12:51:03.663930 kubelet[2789]: I0115 12:51:03.663898 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.664339 kubelet[2789]: E0115 12:51:03.664262 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.742778 systemd[1]: Created slice kubepods-burstable-pod23bf9261a6120629d7f00b2d151a1f5e.slice - libcontainer container kubepods-burstable-pod23bf9261a6120629d7f00b2d151a1f5e.slice. Jan 15 12:51:03.762696 systemd[1]: Created slice kubepods-burstable-podbff70c8c34dcb3e4ab4ab08fb2309872.slice - libcontainer container kubepods-burstable-podbff70c8c34dcb3e4ab4ab08fb2309872.slice. Jan 15 12:51:03.774820 systemd[1]: Created slice kubepods-burstable-pod131bc305feb4019ffdea431f97ae18f1.slice - libcontainer container kubepods-burstable-pod131bc305feb4019ffdea431f97ae18f1.slice. 
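
The three kubepods-burstable-pod<uid>.slice units map one-to-one onto static pod manifests: 23bf9261… is the scheduler, bff70c8c… the apiserver, and 131bc305… the controller-manager, matching the volume mounts and RunPodSandbox calls that follow. A sketch for lining them up; the manifest path is the kubeadm default:

    # Static pod manifests the kubelet is acting on
    ls /etc/kubernetes/manifests/

    # The matching QoS cgroup slices created above
    systemctl list-units --type=slice 'kubepods*' --no-pager
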
Jan 15 12:51:03.785262 kubelet[2789]: I0115 12:51:03.785165 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785262 kubelet[2789]: I0115 12:51:03.785199 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23bf9261a6120629d7f00b2d151a1f5e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-c63c213d7c\" (UID: \"23bf9261a6120629d7f00b2d151a1f5e\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785262 kubelet[2789]: I0115 12:51:03.785216 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785262 kubelet[2789]: I0115 12:51:03.785239 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bff70c8c34dcb3e4ab4ab08fb2309872-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-c63c213d7c\" (UID: \"bff70c8c34dcb3e4ab4ab08fb2309872\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785262 kubelet[2789]: I0115 12:51:03.785255 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785471 kubelet[2789]: I0115 12:51:03.785270 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785471 kubelet[2789]: I0115 12:51:03.785284 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785471 kubelet[2789]: I0115 12:51:03.785299 2789 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bff70c8c34dcb3e4ab4ab08fb2309872-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c63c213d7c\" (UID: \"bff70c8c34dcb3e4ab4ab08fb2309872\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.785471 kubelet[2789]: I0115 12:51:03.785313 2789 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bff70c8c34dcb3e4ab4ab08fb2309872-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c63c213d7c\" (UID: \"bff70c8c34dcb3e4ab4ab08fb2309872\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.867168 kubelet[2789]: I0115 12:51:03.867017 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.867687 kubelet[2789]: E0115 12:51:03.867305 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:03.982782 kubelet[2789]: E0115 12:51:03.982738 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c63c213d7c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms" Jan 15 12:51:04.060969 containerd[1712]: time="2025-01-15T12:51:04.060825795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-c63c213d7c,Uid:23bf9261a6120629d7f00b2d151a1f5e,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:04.072577 containerd[1712]: time="2025-01-15T12:51:04.072515700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-c63c213d7c,Uid:bff70c8c34dcb3e4ab4ab08fb2309872,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:04.078059 containerd[1712]: time="2025-01-15T12:51:04.078021013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-c63c213d7c,Uid:131bc305feb4019ffdea431f97ae18f1,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:04.269867 kubelet[2789]: I0115 12:51:04.269765 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:04.270422 kubelet[2789]: E0115 12:51:04.270393 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:04.507960 kubelet[2789]: W0115 12:51:04.507891 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c63c213d7c&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:04.508099 kubelet[2789]: E0115 12:51:04.507978 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c63c213d7c&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:04.745509 kubelet[2789]: W0115 12:51:04.745467 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:04.745880 kubelet[2789]: E0115 12:51:04.745518 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:04.763040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891261197.mount: Deactivated successfully. Jan 15 12:51:04.783310 kubelet[2789]: E0115 12:51:04.783263 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c63c213d7c?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s" Jan 15 12:51:04.807988 containerd[1712]: time="2025-01-15T12:51:04.807510818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:51:04.812336 containerd[1712]: time="2025-01-15T12:51:04.812290731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 15 12:51:04.816530 containerd[1712]: time="2025-01-15T12:51:04.816490164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:51:04.823298 containerd[1712]: time="2025-01-15T12:51:04.822493915Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:51:04.828104 containerd[1712]: time="2025-01-15T12:51:04.828020147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:51:04.833654 containerd[1712]: time="2025-01-15T12:51:04.832603020Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:51:04.836724 containerd[1712]: time="2025-01-15T12:51:04.836476055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 15 12:51:04.844607 containerd[1712]: time="2025-01-15T12:51:04.844559682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 12:51:04.846002 containerd[1712]: time="2025-01-15T12:51:04.845426441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 772.809421ms" Jan 15 12:51:04.847113 containerd[1712]: time="2025-01-15T12:51:04.847080799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 786.166284ms" Jan 15 12:51:04.848453 kubelet[2789]: W0115 12:51:04.848357 2789 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:04.848453 kubelet[2789]: E0115 12:51:04.848428 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:04.851956 containerd[1712]: time="2025-01-15T12:51:04.851905711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 773.813418ms" Jan 15 12:51:04.872372 kubelet[2789]: W0115 12:51:04.872254 2789 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Jan 15 12:51:04.872372 kubelet[2789]: E0115 12:51:04.872323 2789 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:05.073038 kubelet[2789]: I0115 12:51:05.072992 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:05.073537 kubelet[2789]: E0115 12:51:05.073502 2789 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:05.500511 containerd[1712]: time="2025-01-15T12:51:05.500110064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:05.500511 containerd[1712]: time="2025-01-15T12:51:05.500203024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:05.500511 containerd[1712]: time="2025-01-15T12:51:05.500226943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.500511 containerd[1712]: time="2025-01-15T12:51:05.500317663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.507120 containerd[1712]: time="2025-01-15T12:51:05.506691374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:05.507120 containerd[1712]: time="2025-01-15T12:51:05.506757454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:05.507120 containerd[1712]: time="2025-01-15T12:51:05.506788574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.507120 containerd[1712]: time="2025-01-15T12:51:05.506884174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.509755 containerd[1712]: time="2025-01-15T12:51:05.507716892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:05.509755 containerd[1712]: time="2025-01-15T12:51:05.507758532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:05.509755 containerd[1712]: time="2025-01-15T12:51:05.507769572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.509755 containerd[1712]: time="2025-01-15T12:51:05.507844852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:05.540146 kubelet[2789]: E0115 12:51:05.540105 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Jan 15 12:51:05.550109 systemd[1]: Started cri-containerd-1eb7243fb621453dd324111e7c08eb43bf4019f311360cd68acfc96d5a68e2ed.scope - libcontainer container 1eb7243fb621453dd324111e7c08eb43bf4019f311360cd68acfc96d5a68e2ed. Jan 15 12:51:05.552004 systemd[1]: Started cri-containerd-38e587fb21d0a8fefaab7a5938d3a564e397b479e1d0010215f3fbff6a62e351.scope - libcontainer container 38e587fb21d0a8fefaab7a5938d3a564e397b479e1d0010215f3fbff6a62e351. Jan 15 12:51:05.553462 systemd[1]: Started cri-containerd-ba15bbcd9c236da521d7d251d1cca01b44dab1f9ad994bfd551f6ce4fd0d2e41.scope - libcontainer container ba15bbcd9c236da521d7d251d1cca01b44dab1f9ad994bfd551f6ce4fd0d2e41. 
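
Note that the sandboxes being created here pulled pause:3.8 even though kubeadm fetched pause:3.10 earlier: the sandbox image comes from containerd's own CRI configuration, which is also the usual context for the --pod-infra-container-image deprecation notice above. A sketch for checking both sides; the config path and key are containerd 1.7 defaults, not shown in this log:

    # Which pause image containerd's CRI uses for sandboxes
    grep sandbox_image /etc/containerd/config.toml

    # The sandboxes and containers from the RunPodSandbox/StartContainer calls
    crictl pods
    crictl ps -a
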
Jan 15 12:51:05.601644 containerd[1712]: time="2025-01-15T12:51:05.601487432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-c63c213d7c,Uid:bff70c8c34dcb3e4ab4ab08fb2309872,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e587fb21d0a8fefaab7a5938d3a564e397b479e1d0010215f3fbff6a62e351\"" Jan 15 12:51:05.604129 containerd[1712]: time="2025-01-15T12:51:05.603594509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-c63c213d7c,Uid:23bf9261a6120629d7f00b2d151a1f5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eb7243fb621453dd324111e7c08eb43bf4019f311360cd68acfc96d5a68e2ed\"" Jan 15 12:51:05.610543 containerd[1712]: time="2025-01-15T12:51:05.610421619Z" level=info msg="CreateContainer within sandbox \"1eb7243fb621453dd324111e7c08eb43bf4019f311360cd68acfc96d5a68e2ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 12:51:05.610818 containerd[1712]: time="2025-01-15T12:51:05.610757418Z" level=info msg="CreateContainer within sandbox \"38e587fb21d0a8fefaab7a5938d3a564e397b479e1d0010215f3fbff6a62e351\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 12:51:05.618473 containerd[1712]: time="2025-01-15T12:51:05.618354887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-c63c213d7c,Uid:131bc305feb4019ffdea431f97ae18f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba15bbcd9c236da521d7d251d1cca01b44dab1f9ad994bfd551f6ce4fd0d2e41\"" Jan 15 12:51:05.621432 containerd[1712]: time="2025-01-15T12:51:05.621253603Z" level=info msg="CreateContainer within sandbox \"ba15bbcd9c236da521d7d251d1cca01b44dab1f9ad994bfd551f6ce4fd0d2e41\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 12:51:05.675766 containerd[1712]: time="2025-01-15T12:51:05.675588562Z" level=info msg="CreateContainer within sandbox \"38e587fb21d0a8fefaab7a5938d3a564e397b479e1d0010215f3fbff6a62e351\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3f98090f98ce1942f6c465586641ffc5b7f84c3ef48dfbaae1baf62c94a074b\"" Jan 15 12:51:05.676402 containerd[1712]: time="2025-01-15T12:51:05.676370681Z" level=info msg="StartContainer for \"a3f98090f98ce1942f6c465586641ffc5b7f84c3ef48dfbaae1baf62c94a074b\"" Jan 15 12:51:05.683982 containerd[1712]: time="2025-01-15T12:51:05.683830309Z" level=info msg="CreateContainer within sandbox \"ba15bbcd9c236da521d7d251d1cca01b44dab1f9ad994bfd551f6ce4fd0d2e41\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e21e4b7bb4aae631760a782d99f361479dad5279fca7f8661b5607fbd1f44fa\"" Jan 15 12:51:05.684430 containerd[1712]: time="2025-01-15T12:51:05.684400309Z" level=info msg="StartContainer for \"0e21e4b7bb4aae631760a782d99f361479dad5279fca7f8661b5607fbd1f44fa\"" Jan 15 12:51:05.693460 containerd[1712]: time="2025-01-15T12:51:05.693428655Z" level=info msg="CreateContainer within sandbox \"1eb7243fb621453dd324111e7c08eb43bf4019f311360cd68acfc96d5a68e2ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d134175e5dc2fddaba3efb3fcdf79c7d1f2e7ad20c1a0981bb9e9ac3fcff8506\"" Jan 15 12:51:05.695981 containerd[1712]: time="2025-01-15T12:51:05.694141134Z" level=info msg="StartContainer for \"d134175e5dc2fddaba3efb3fcdf79c7d1f2e7ad20c1a0981bb9e9ac3fcff8506\"" Jan 15 12:51:05.705297 systemd[1]: Started cri-containerd-a3f98090f98ce1942f6c465586641ffc5b7f84c3ef48dfbaae1baf62c94a074b.scope - libcontainer container 
a3f98090f98ce1942f6c465586641ffc5b7f84c3ef48dfbaae1baf62c94a074b. Jan 15 12:51:05.713087 systemd[1]: Started cri-containerd-0e21e4b7bb4aae631760a782d99f361479dad5279fca7f8661b5607fbd1f44fa.scope - libcontainer container 0e21e4b7bb4aae631760a782d99f361479dad5279fca7f8661b5607fbd1f44fa. Jan 15 12:51:05.736195 systemd[1]: Started cri-containerd-d134175e5dc2fddaba3efb3fcdf79c7d1f2e7ad20c1a0981bb9e9ac3fcff8506.scope - libcontainer container d134175e5dc2fddaba3efb3fcdf79c7d1f2e7ad20c1a0981bb9e9ac3fcff8506. Jan 15 12:51:05.784813 containerd[1712]: time="2025-01-15T12:51:05.784771279Z" level=info msg="StartContainer for \"a3f98090f98ce1942f6c465586641ffc5b7f84c3ef48dfbaae1baf62c94a074b\" returns successfully" Jan 15 12:51:05.785055 containerd[1712]: time="2025-01-15T12:51:05.785034278Z" level=info msg="StartContainer for \"0e21e4b7bb4aae631760a782d99f361479dad5279fca7f8661b5607fbd1f44fa\" returns successfully" Jan 15 12:51:05.814222 containerd[1712]: time="2025-01-15T12:51:05.814171875Z" level=info msg="StartContainer for \"d134175e5dc2fddaba3efb3fcdf79c7d1f2e7ad20c1a0981bb9e9ac3fcff8506\" returns successfully" Jan 15 12:51:06.675680 kubelet[2789]: I0115 12:51:06.675647 2789 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:07.305959 kubelet[2789]: E0115 12:51:07.305910 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-c63c213d7c\" not found" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:07.510050 kubelet[2789]: I0115 12:51:07.509445 2789 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:07.510160 kubelet[2789]: E0115 12:51:07.510078 2789 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.0-a-c63c213d7c\": node \"ci-4081.3.0-a-c63c213d7c\" not found" Jan 15 12:51:08.370638 kubelet[2789]: I0115 12:51:08.370569 2789 apiserver.go:52] "Watching apiserver" Jan 15 12:51:08.383508 kubelet[2789]: I0115 12:51:08.383462 2789 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 15 12:51:09.508058 systemd[1]: Reloading requested from client PID 3060 ('systemctl') (unit session-9.scope)... Jan 15 12:51:09.508074 systemd[1]: Reloading... Jan 15 12:51:09.585992 zram_generator::config[3096]: No configuration found. Jan 15 12:51:09.694704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 15 12:51:09.788446 systemd[1]: Reloading finished in 280 ms. Jan 15 12:51:09.825810 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
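
The containerd entries above trace the full CRI lifecycle for the static control-plane pods: RunPodSandbox returns a sandbox id, CreateContainer places a container inside that sandbox, and StartContainer runs it. A minimal sketch of the same sequence against the runtime.v1 gRPC API that containerd serves here (socket path and image are assumptions, not values from this log):

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI socket; /run/containerd/containerd.sock is the usual default.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "demo", Uid: "demo-uid", Namespace: "default", Attempt: 0,
		},
	}
	// RunPodSandbox -> sandbox id, as in the "returns sandbox id" entries above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	// CreateContainer within that sandbox, then StartContainer.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.9"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}
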
Jan 15 12:51:09.827894 kubelet[2789]: I0115 12:51:09.826370 2789 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:51:09.827894 kubelet[2789]: E0115 12:51:09.826566 2789 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.0-a-c63c213d7c.181adeb23a6ec5ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-c63c213d7c,UID:ci-4081.3.0-a-c63c213d7c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-c63c213d7c,},FirstTimestamp:2025-01-15 12:51:03.370630574 +0000 UTC m=+0.833568555,LastTimestamp:2025-01-15 12:51:03.370630574 +0000 UTC m=+0.833568555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-c63c213d7c,}" Jan 15 12:51:09.842421 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 12:51:09.842770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:09.843408 systemd[1]: kubelet.service: Consumed 1.118s CPU time, 117.3M memory peak, 0B memory swap peak. Jan 15 12:51:09.853473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 12:51:09.969350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 12:51:09.979248 (kubelet)[3164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 12:51:10.021871 kubelet[3164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:51:10.021871 kubelet[3164]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 15 12:51:10.021871 kubelet[3164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 12:51:10.022220 kubelet[3164]: I0115 12:51:10.021923 3164 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 12:51:10.026904 kubelet[3164]: I0115 12:51:10.026864 3164 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 15 12:51:10.026904 kubelet[3164]: I0115 12:51:10.026892 3164 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 12:51:10.027159 kubelet[3164]: I0115 12:51:10.027138 3164 server.go:929] "Client rotation is on, will bootstrap in background" Jan 15 12:51:10.028553 kubelet[3164]: I0115 12:51:10.028533 3164 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
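
The three "Flag ... has been deprecated" warnings above point at the kubelet config file. A hedged sketch of the equivalent KubeletConfiguration fields (values are illustrative guesses, not read from this node; --pod-infra-container-image has no config-file equivalent, since per its own warning the sandbox image will come from the CRI runtime instead):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint; containerd's default socket is assumed.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir; this path matches the FlexVolume probe paths
# (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/...) later in the log.
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
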
Jan 15 12:51:10.030782 kubelet[3164]: I0115 12:51:10.030629 3164 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 12:51:10.034290 kubelet[3164]: E0115 12:51:10.033997 3164 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 15 12:51:10.034290 kubelet[3164]: I0115 12:51:10.034056 3164 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 15 12:51:10.044250 kubelet[3164]: I0115 12:51:10.044156 3164 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 15 12:51:10.044446 kubelet[3164]: I0115 12:51:10.044434 3164 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 15 12:51:10.044624 kubelet[3164]: I0115 12:51:10.044597 3164 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 12:51:10.044834 kubelet[3164]: I0115 12:51:10.044682 3164 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-c63c213d7c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 12:51:10.045127 kubelet[3164]: I0115 12:51:10.044959 3164 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 12:51:10.045127 kubelet[3164]: I0115 12:51:10.044975 3164 container_manager_linux.go:300] "Creating device plugin manager" Jan 15 12:51:10.045127 kubelet[3164]: I0115 12:51:10.045011 3164 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:51:10.045262 kubelet[3164]: I0115 12:51:10.045251 3164 kubelet.go:408] "Attempting to sync node with API server" Jan 15 12:51:10.045822 kubelet[3164]: I0115 12:51:10.045754 3164 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 12:51:10.046216 kubelet[3164]: I0115 
12:51:10.045969 3164 kubelet.go:314] "Adding apiserver pod source" Jan 15 12:51:10.046216 kubelet[3164]: I0115 12:51:10.045986 3164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 12:51:10.047479 kubelet[3164]: I0115 12:51:10.047458 3164 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 15 12:51:10.047950 kubelet[3164]: I0115 12:51:10.047916 3164 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 12:51:10.049914 kubelet[3164]: I0115 12:51:10.049659 3164 server.go:1269] "Started kubelet" Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.049839 3164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.052767 3164 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.053088 3164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.053564 3164 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.055402 3164 server.go:460] "Adding debug handlers to kubelet server" Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.056634 3164 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 12:51:10.058928 kubelet[3164]: I0115 12:51:10.058427 3164 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 15 12:51:10.058928 kubelet[3164]: E0115 12:51:10.058666 3164 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-c63c213d7c\" not found" Jan 15 12:51:10.059306 kubelet[3164]: I0115 12:51:10.059275 3164 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 15 12:51:10.059676 kubelet[3164]: I0115 12:51:10.059661 3164 reconciler.go:26] "Reconciler: start to sync state" Jan 15 12:51:10.066366 kubelet[3164]: I0115 12:51:10.066310 3164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 12:51:10.067950 kubelet[3164]: I0115 12:51:10.067134 3164 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 15 12:51:10.067950 kubelet[3164]: I0115 12:51:10.067159 3164 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 15 12:51:10.067950 kubelet[3164]: I0115 12:51:10.067178 3164 kubelet.go:2321] "Starting kubelet main sync loop" Jan 15 12:51:10.067950 kubelet[3164]: E0115 12:51:10.067216 3164 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 12:51:10.073949 kubelet[3164]: I0115 12:51:10.073481 3164 factory.go:221] Registration of the systemd container factory successfully Jan 15 12:51:10.073949 kubelet[3164]: I0115 12:51:10.073588 3164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 12:51:10.077959 kubelet[3164]: I0115 12:51:10.076352 3164 factory.go:221] Registration of the containerd container factory successfully Jan 15 12:51:10.118891 kubelet[3164]: E0115 12:51:10.118848 3164 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 12:51:10.146773 kubelet[3164]: I0115 12:51:10.146752 3164 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 15 12:51:10.146923 kubelet[3164]: I0115 12:51:10.146910 3164 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 15 12:51:10.147020 kubelet[3164]: I0115 12:51:10.147010 3164 state_mem.go:36] "Initialized new in-memory state store" Jan 15 12:51:10.147210 kubelet[3164]: I0115 12:51:10.147196 3164 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 12:51:10.147284 kubelet[3164]: I0115 12:51:10.147261 3164 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 12:51:10.147334 kubelet[3164]: I0115 12:51:10.147327 3164 policy_none.go:49] "None policy: Start" Jan 15 12:51:10.148138 kubelet[3164]: I0115 12:51:10.148125 3164 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 15 12:51:10.148239 kubelet[3164]: I0115 12:51:10.148228 3164 state_mem.go:35] "Initializing new in-memory state store" Jan 15 12:51:10.148427 kubelet[3164]: I0115 12:51:10.148416 3164 state_mem.go:75] "Updated machine memory state" Jan 15 12:51:10.152175 kubelet[3164]: I0115 12:51:10.152159 3164 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 12:51:10.152399 kubelet[3164]: I0115 12:51:10.152385 3164 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 12:51:10.152488 kubelet[3164]: I0115 12:51:10.152461 3164 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 12:51:10.152720 kubelet[3164]: I0115 12:51:10.152702 3164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 12:51:10.177611 kubelet[3164]: W0115 12:51:10.177367 3164 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:51:10.180572 kubelet[3164]: W0115 12:51:10.180539 3164 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:51:10.181295 kubelet[3164]: W0115 12:51:10.180972 3164 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Jan 15 12:51:10.258493 kubelet[3164]: I0115 12:51:10.258241 3164 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.259911 kubelet[3164]: I0115 12:51:10.259874 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bff70c8c34dcb3e4ab4ab08fb2309872-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c63c213d7c\" (UID: \"bff70c8c34dcb3e4ab4ab08fb2309872\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260171 kubelet[3164]: I0115 12:51:10.260080 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bff70c8c34dcb3e4ab4ab08fb2309872-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c63c213d7c\" (UID: \"bff70c8c34dcb3e4ab4ab08fb2309872\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260171 kubelet[3164]: I0115 12:51:10.260124 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bff70c8c34dcb3e4ab4ab08fb2309872-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-c63c213d7c\" (UID: \"bff70c8c34dcb3e4ab4ab08fb2309872\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260171 kubelet[3164]: I0115 12:51:10.260144 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260388 kubelet[3164]: I0115 12:51:10.260283 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260388 kubelet[3164]: I0115 12:51:10.260306 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23bf9261a6120629d7f00b2d151a1f5e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-c63c213d7c\" (UID: \"23bf9261a6120629d7f00b2d151a1f5e\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260550 kubelet[3164]: I0115 12:51:10.260426 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260550 kubelet[3164]: I0115 12:51:10.260449 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.260550 kubelet[3164]: I0115 12:51:10.260465 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/131bc305feb4019ffdea431f97ae18f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-c63c213d7c\" (UID: \"131bc305feb4019ffdea431f97ae18f1\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.269230 kubelet[3164]: I0115 12:51:10.269151 3164 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:10.269230 kubelet[3164]: I0115 12:51:10.269219 3164 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:11.046694 kubelet[3164]: I0115 12:51:11.046601 3164 apiserver.go:52] "Watching apiserver" Jan 15 12:51:11.059819 kubelet[3164]: I0115 12:51:11.059772 3164 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 15 12:51:11.201233 kubelet[3164]: I0115 12:51:11.201172 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-c63c213d7c" podStartSLOduration=1.201153752 podStartE2EDuration="1.201153752s" podCreationTimestamp="2025-01-15 12:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:11.184052777 +0000 UTC m=+1.201746006" watchObservedRunningTime="2025-01-15 12:51:11.201153752 +0000 UTC m=+1.218846981" Jan 15 12:51:11.214976 kubelet[3164]: I0115 12:51:11.214909 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-c63c213d7c" podStartSLOduration=1.214890371 podStartE2EDuration="1.214890371s" podCreationTimestamp="2025-01-15 12:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:11.20219059 +0000 UTC m=+1.219883859" watchObservedRunningTime="2025-01-15 12:51:11.214890371 +0000 UTC m=+1.232583600" Jan 15 12:51:11.215689 kubelet[3164]: I0115 12:51:11.215232 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c63c213d7c" podStartSLOduration=1.215222011 podStartE2EDuration="1.215222011s" podCreationTimestamp="2025-01-15 12:51:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:11.214215492 +0000 UTC m=+1.231908721" watchObservedRunningTime="2025-01-15 12:51:11.215222011 +0000 UTC m=+1.232915240" Jan 15 12:51:14.319181 kubelet[3164]: I0115 12:51:14.319142 3164 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 12:51:14.319518 containerd[1712]: time="2025-01-15T12:51:14.319471127Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 12:51:14.319690 kubelet[3164]: I0115 12:51:14.319629 3164 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 12:51:15.175786 systemd[1]: Created slice kubepods-besteffort-podeef5c080_2c73_468a_9405_d5ef4c7a2e7c.slice - libcontainer container kubepods-besteffort-podeef5c080_2c73_468a_9405_d5ef4c7a2e7c.slice. 
Jan 15 12:51:15.189878 kubelet[3164]: I0115 12:51:15.189488 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd72g\" (UniqueName: \"kubernetes.io/projected/eef5c080-2c73-468a-9405-d5ef4c7a2e7c-kube-api-access-kd72g\") pod \"kube-proxy-hwkvt\" (UID: \"eef5c080-2c73-468a-9405-d5ef4c7a2e7c\") " pod="kube-system/kube-proxy-hwkvt" Jan 15 12:51:15.189878 kubelet[3164]: I0115 12:51:15.189528 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eef5c080-2c73-468a-9405-d5ef4c7a2e7c-kube-proxy\") pod \"kube-proxy-hwkvt\" (UID: \"eef5c080-2c73-468a-9405-d5ef4c7a2e7c\") " pod="kube-system/kube-proxy-hwkvt" Jan 15 12:51:15.189878 kubelet[3164]: I0115 12:51:15.189547 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eef5c080-2c73-468a-9405-d5ef4c7a2e7c-xtables-lock\") pod \"kube-proxy-hwkvt\" (UID: \"eef5c080-2c73-468a-9405-d5ef4c7a2e7c\") " pod="kube-system/kube-proxy-hwkvt" Jan 15 12:51:15.189878 kubelet[3164]: I0115 12:51:15.189563 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eef5c080-2c73-468a-9405-d5ef4c7a2e7c-lib-modules\") pod \"kube-proxy-hwkvt\" (UID: \"eef5c080-2c73-468a-9405-d5ef4c7a2e7c\") " pod="kube-system/kube-proxy-hwkvt" Jan 15 12:51:15.312067 sudo[2245]: pam_unix(sudo:session): session closed for user root Jan 15 12:51:15.397203 sshd[2240]: pam_unix(sshd:session): session closed for user core Jan 15 12:51:15.405446 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:45784.service: Deactivated successfully. Jan 15 12:51:15.411758 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 12:51:15.412229 systemd[1]: session-9.scope: Consumed 6.683s CPU time, 153.8M memory peak, 0B memory swap peak. Jan 15 12:51:15.415608 systemd-logind[1685]: Session 9 logged out. Waiting for processes to exit. Jan 15 12:51:15.418617 systemd[1]: Created slice kubepods-besteffort-pod02b6af6e_4f4c_45c3_8ced_cee2aa45be40.slice - libcontainer container kubepods-besteffort-pod02b6af6e_4f4c_45c3_8ced_cee2aa45be40.slice. Jan 15 12:51:15.419898 systemd-logind[1685]: Removed session 9. Jan 15 12:51:15.486184 containerd[1712]: time="2025-01-15T12:51:15.486078355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hwkvt,Uid:eef5c080-2c73-468a-9405-d5ef4c7a2e7c,Namespace:kube-system,Attempt:0,}" Jan 15 12:51:15.527889 containerd[1712]: time="2025-01-15T12:51:15.527787097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:15.527889 containerd[1712]: time="2025-01-15T12:51:15.527841057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:15.527889 containerd[1712]: time="2025-01-15T12:51:15.527868737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:15.529107 containerd[1712]: time="2025-01-15T12:51:15.528001497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:15.547116 systemd[1]: Started cri-containerd-798f575731925f9a5f7c360cc64d6b4e03834c0974f1416530d91c0e641cd103.scope - libcontainer container 798f575731925f9a5f7c360cc64d6b4e03834c0974f1416530d91c0e641cd103. Jan 15 12:51:15.568111 containerd[1712]: time="2025-01-15T12:51:15.568045921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hwkvt,Uid:eef5c080-2c73-468a-9405-d5ef4c7a2e7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"798f575731925f9a5f7c360cc64d6b4e03834c0974f1416530d91c0e641cd103\"" Jan 15 12:51:15.571920 containerd[1712]: time="2025-01-15T12:51:15.571864156Z" level=info msg="CreateContainer within sandbox \"798f575731925f9a5f7c360cc64d6b4e03834c0974f1416530d91c0e641cd103\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 12:51:15.591597 kubelet[3164]: I0115 12:51:15.591549 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcft2\" (UniqueName: \"kubernetes.io/projected/02b6af6e-4f4c-45c3-8ced-cee2aa45be40-kube-api-access-gcft2\") pod \"tigera-operator-76c4976dd7-szpbl\" (UID: \"02b6af6e-4f4c-45c3-8ced-cee2aa45be40\") " pod="tigera-operator/tigera-operator-76c4976dd7-szpbl" Jan 15 12:51:15.591597 kubelet[3164]: I0115 12:51:15.591603 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02b6af6e-4f4c-45c3-8ced-cee2aa45be40-var-lib-calico\") pod \"tigera-operator-76c4976dd7-szpbl\" (UID: \"02b6af6e-4f4c-45c3-8ced-cee2aa45be40\") " pod="tigera-operator/tigera-operator-76c4976dd7-szpbl" Jan 15 12:51:15.613928 containerd[1712]: time="2025-01-15T12:51:15.613792658Z" level=info msg="CreateContainer within sandbox \"798f575731925f9a5f7c360cc64d6b4e03834c0974f1416530d91c0e641cd103\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2bcf0dab4f336c94c7dc7c87a19ffcb97f85d51ba75e84fdc046c2ad74e2a12\"" Jan 15 12:51:15.614888 containerd[1712]: time="2025-01-15T12:51:15.614775617Z" level=info msg="StartContainer for \"c2bcf0dab4f336c94c7dc7c87a19ffcb97f85d51ba75e84fdc046c2ad74e2a12\"" Jan 15 12:51:15.641123 systemd[1]: Started cri-containerd-c2bcf0dab4f336c94c7dc7c87a19ffcb97f85d51ba75e84fdc046c2ad74e2a12.scope - libcontainer container c2bcf0dab4f336c94c7dc7c87a19ffcb97f85d51ba75e84fdc046c2ad74e2a12. Jan 15 12:51:15.669628 containerd[1712]: time="2025-01-15T12:51:15.669521941Z" level=info msg="StartContainer for \"c2bcf0dab4f336c94c7dc7c87a19ffcb97f85d51ba75e84fdc046c2ad74e2a12\" returns successfully" Jan 15 12:51:15.723550 containerd[1712]: time="2025-01-15T12:51:15.723419387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-szpbl,Uid:02b6af6e-4f4c-45c3-8ced-cee2aa45be40,Namespace:tigera-operator,Attempt:0,}" Jan 15 12:51:15.776126 containerd[1712]: time="2025-01-15T12:51:15.775563914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:15.776126 containerd[1712]: time="2025-01-15T12:51:15.775652354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:15.776126 containerd[1712]: time="2025-01-15T12:51:15.775666914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:15.776126 containerd[1712]: time="2025-01-15T12:51:15.775749554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:15.804167 systemd[1]: Started cri-containerd-d22b45a2f35dd446272906680ec6507182e3be0fa9018e58d49357e916ed8b6e.scope - libcontainer container d22b45a2f35dd446272906680ec6507182e3be0fa9018e58d49357e916ed8b6e. Jan 15 12:51:15.832515 containerd[1712]: time="2025-01-15T12:51:15.832474156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-szpbl,Uid:02b6af6e-4f4c-45c3-8ced-cee2aa45be40,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d22b45a2f35dd446272906680ec6507182e3be0fa9018e58d49357e916ed8b6e\"" Jan 15 12:51:15.834831 containerd[1712]: time="2025-01-15T12:51:15.834754313Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 15 12:51:16.154367 kubelet[3164]: I0115 12:51:16.154130 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hwkvt" podStartSLOduration=1.154112231 podStartE2EDuration="1.154112231s" podCreationTimestamp="2025-01-15 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:16.153930791 +0000 UTC m=+6.171624020" watchObservedRunningTime="2025-01-15 12:51:16.154112231 +0000 UTC m=+6.171805460" Jan 15 12:51:17.693758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633510380.mount: Deactivated successfully. Jan 15 12:51:19.794977 containerd[1712]: time="2025-01-15T12:51:19.794864839Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:19.797546 containerd[1712]: time="2025-01-15T12:51:19.797507555Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125376" Jan 15 12:51:19.809432 containerd[1712]: time="2025-01-15T12:51:19.809359819Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:19.813663 containerd[1712]: time="2025-01-15T12:51:19.813622253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:19.814846 containerd[1712]: time="2025-01-15T12:51:19.814305732Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.979509739s" Jan 15 12:51:19.814846 containerd[1712]: time="2025-01-15T12:51:19.814347012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 15 12:51:19.817709 containerd[1712]: time="2025-01-15T12:51:19.817667447Z" level=info msg="CreateContainer within sandbox \"d22b45a2f35dd446272906680ec6507182e3be0fa9018e58d49357e916ed8b6e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 12:51:19.843261 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2137130532.mount: Deactivated successfully. Jan 15 12:51:19.857435 containerd[1712]: time="2025-01-15T12:51:19.857335472Z" level=info msg="CreateContainer within sandbox \"d22b45a2f35dd446272906680ec6507182e3be0fa9018e58d49357e916ed8b6e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2783539c9be3d4bfb70adea9e7fddae1b7d7584c573b24f67ac5fc93ac7e2e4c\"" Jan 15 12:51:19.858622 containerd[1712]: time="2025-01-15T12:51:19.857838952Z" level=info msg="StartContainer for \"2783539c9be3d4bfb70adea9e7fddae1b7d7584c573b24f67ac5fc93ac7e2e4c\"" Jan 15 12:51:19.891140 systemd[1]: Started cri-containerd-2783539c9be3d4bfb70adea9e7fddae1b7d7584c573b24f67ac5fc93ac7e2e4c.scope - libcontainer container 2783539c9be3d4bfb70adea9e7fddae1b7d7584c573b24f67ac5fc93ac7e2e4c. Jan 15 12:51:19.916974 containerd[1712]: time="2025-01-15T12:51:19.916766310Z" level=info msg="StartContainer for \"2783539c9be3d4bfb70adea9e7fddae1b7d7584c573b24f67ac5fc93ac7e2e4c\" returns successfully" Jan 15 12:51:21.300597 kubelet[3164]: I0115 12:51:21.300335 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-szpbl" podStartSLOduration=2.319244828 podStartE2EDuration="6.300316005s" podCreationTimestamp="2025-01-15 12:51:15 +0000 UTC" firstStartedPulling="2025-01-15 12:51:15.834051434 +0000 UTC m=+5.851744623" lastFinishedPulling="2025-01-15 12:51:19.815122571 +0000 UTC m=+9.832815800" observedRunningTime="2025-01-15 12:51:20.163645169 +0000 UTC m=+10.181338398" watchObservedRunningTime="2025-01-15 12:51:21.300316005 +0000 UTC m=+11.318009234" Jan 15 12:51:23.996354 systemd[1]: Created slice kubepods-besteffort-podfeca1961_1a00_4ffc_a412_2de4c1ed1bdf.slice - libcontainer container kubepods-besteffort-podfeca1961_1a00_4ffc_a412_2de4c1ed1bdf.slice. Jan 15 12:51:24.087434 systemd[1]: Created slice kubepods-besteffort-podf39ecac4_4d02_4b5e_904d_e0388b990a66.slice - libcontainer container kubepods-besteffort-podf39ecac4_4d02_4b5e_904d_e0388b990a66.slice. 
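
The pull recorded above (quay.io/tigera/operator:v1.36.2, ~19 MB in just under 4 s, returning the image's sha256 digest) goes through containerd's CRI ImageService, the same path the kubelet uses. A minimal sketch of that PullImage call (socket path is the usual containerd default, an assumption):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		panic(err)
	}
	// On success the response carries the resolved image reference (digest),
	// matching the "returns image reference" entry in the log.
	fmt.Println(resp.ImageRef)
}
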
Jan 15 12:51:24.142623 kubelet[3164]: I0115 12:51:24.142483 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4t4g\" (UniqueName: \"kubernetes.io/projected/feca1961-1a00-4ffc-a412-2de4c1ed1bdf-kube-api-access-n4t4g\") pod \"calico-typha-d969cffbf-jvsvr\" (UID: \"feca1961-1a00-4ffc-a412-2de4c1ed1bdf\") " pod="calico-system/calico-typha-d969cffbf-jvsvr" Jan 15 12:51:24.142623 kubelet[3164]: I0115 12:51:24.142553 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feca1961-1a00-4ffc-a412-2de4c1ed1bdf-tigera-ca-bundle\") pod \"calico-typha-d969cffbf-jvsvr\" (UID: \"feca1961-1a00-4ffc-a412-2de4c1ed1bdf\") " pod="calico-system/calico-typha-d969cffbf-jvsvr" Jan 15 12:51:24.142623 kubelet[3164]: I0115 12:51:24.142572 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/feca1961-1a00-4ffc-a412-2de4c1ed1bdf-typha-certs\") pod \"calico-typha-d969cffbf-jvsvr\" (UID: \"feca1961-1a00-4ffc-a412-2de4c1ed1bdf\") " pod="calico-system/calico-typha-d969cffbf-jvsvr" Jan 15 12:51:24.215025 kubelet[3164]: E0115 12:51:24.214961 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc" Jan 15 12:51:24.243075 kubelet[3164]: I0115 12:51:24.243026 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-cni-net-dir\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243075 kubelet[3164]: I0115 12:51:24.243074 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f39ecac4-4d02-4b5e-904d-e0388b990a66-tigera-ca-bundle\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243237 kubelet[3164]: I0115 12:51:24.243099 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-var-run-calico\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243237 kubelet[3164]: I0115 12:51:24.243116 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-cni-log-dir\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243237 kubelet[3164]: I0115 12:51:24.243132 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-xtables-lock\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 
12:51:24.243237 kubelet[3164]: I0115 12:51:24.243146 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-policysync\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243237 kubelet[3164]: I0115 12:51:24.243172 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-cni-bin-dir\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243351 kubelet[3164]: I0115 12:51:24.243188 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-lib-modules\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243351 kubelet[3164]: I0115 12:51:24.243223 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-flexvol-driver-host\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243351 kubelet[3164]: I0115 12:51:24.243243 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f39ecac4-4d02-4b5e-904d-e0388b990a66-node-certs\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243351 kubelet[3164]: I0115 12:51:24.243257 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f39ecac4-4d02-4b5e-904d-e0388b990a66-var-lib-calico\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.243351 kubelet[3164]: I0115 12:51:24.243273 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdq4f\" (UniqueName: \"kubernetes.io/projected/f39ecac4-4d02-4b5e-904d-e0388b990a66-kube-api-access-gdq4f\") pod \"calico-node-9txkr\" (UID: \"f39ecac4-4d02-4b5e-904d-e0388b990a66\") " pod="calico-system/calico-node-9txkr" Jan 15 12:51:24.303673 containerd[1712]: time="2025-01-15T12:51:24.303498045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d969cffbf-jvsvr,Uid:feca1961-1a00-4ffc-a412-2de4c1ed1bdf,Namespace:calico-system,Attempt:0,}" Jan 15 12:51:24.344459 kubelet[3164]: I0115 12:51:24.344416 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc-kubelet-dir\") pod \"csi-node-driver-rwl5w\" (UID: \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\") " pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:24.344459 kubelet[3164]: I0115 12:51:24.344461 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbp59\" (UniqueName: 
\"kubernetes.io/projected/78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc-kube-api-access-cbp59\") pod \"csi-node-driver-rwl5w\" (UID: \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\") " pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:24.344617 kubelet[3164]: I0115 12:51:24.344489 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc-socket-dir\") pod \"csi-node-driver-rwl5w\" (UID: \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\") " pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:24.344617 kubelet[3164]: I0115 12:51:24.344540 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc-registration-dir\") pod \"csi-node-driver-rwl5w\" (UID: \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\") " pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:24.345183 kubelet[3164]: I0115 12:51:24.344726 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc-varrun\") pod \"csi-node-driver-rwl5w\" (UID: \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\") " pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:24.346805 kubelet[3164]: E0115 12:51:24.346593 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.347162 kubelet[3164]: W0115 12:51:24.347002 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.347162 kubelet[3164]: E0115 12:51:24.347038 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.349150 kubelet[3164]: E0115 12:51:24.348988 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.349150 kubelet[3164]: W0115 12:51:24.349106 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.349150 kubelet[3164]: E0115 12:51:24.349125 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.350993 kubelet[3164]: E0115 12:51:24.350974 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.350993 kubelet[3164]: W0115 12:51:24.351019 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.351506 kubelet[3164]: E0115 12:51:24.351439 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:24.352075 kubelet[3164]: E0115 12:51:24.352025 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.352075 kubelet[3164]: W0115 12:51:24.352036 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.352806 kubelet[3164]: E0115 12:51:24.352631 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.353278 kubelet[3164]: E0115 12:51:24.353066 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.353278 kubelet[3164]: W0115 12:51:24.353225 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.353656 kubelet[3164]: E0115 12:51:24.353539 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.354244 kubelet[3164]: E0115 12:51:24.354002 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.354244 kubelet[3164]: W0115 12:51:24.354016 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.354666 kubelet[3164]: E0115 12:51:24.354431 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.354985 kubelet[3164]: E0115 12:51:24.354881 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.354985 kubelet[3164]: W0115 12:51:24.354894 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.354985 kubelet[3164]: E0115 12:51:24.354925 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.355860 kubelet[3164]: E0115 12:51:24.355828 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.355860 kubelet[3164]: W0115 12:51:24.355845 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.356036 kubelet[3164]: E0115 12:51:24.356009 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 12:51:24.357136 kubelet[3164]: E0115 12:51:24.357107 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.357136 kubelet[3164]: W0115 12:51:24.357128 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.360920 kubelet[3164]: E0115 12:51:24.360640 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.361499 kubelet[3164]: E0115 12:51:24.361123 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.361499 kubelet[3164]: W0115 12:51:24.361140 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.361605 kubelet[3164]: E0115 12:51:24.361577 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.361605 kubelet[3164]: W0115 12:51:24.361588 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.361970 kubelet[3164]: E0115 12:51:24.361953 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.362046 kubelet[3164]: E0115 12:51:24.362035 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 12:51:24.364459 kubelet[3164]: E0115 12:51:24.364347 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 12:51:24.364459 kubelet[3164]: W0115 12:51:24.364366 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 12:51:24.365512 containerd[1712]: time="2025-01-15T12:51:24.365057413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:24.365512 containerd[1712]: time="2025-01-15T12:51:24.365129653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:24.365512 containerd[1712]: time="2025-01-15T12:51:24.365155853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:24.365512 containerd[1712]: time="2025-01-15T12:51:24.365230332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:24.365677 kubelet[3164]: E0115 12:51:24.365295 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 15 12:51:24.365677 kubelet[3164]: E0115 12:51:24.365354 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:24.365677 kubelet[3164]: W0115 12:51:24.365362 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:24.365677 kubelet[3164]: E0115 12:51:24.365473 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the kubelet driver-call.go/plugins.go FlexVolume init-error messages above repeat with new timestamps through 12:51:24.377]
Jan 15 12:51:24.392596 containerd[1712]: time="2025-01-15T12:51:24.392198643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9txkr,Uid:f39ecac4-4d02-4b5e-904d-e0388b990a66,Namespace:calico-system,Attempt:0,}"
Jan 15 12:51:24.394162 systemd[1]: Started cri-containerd-c4de791491d00c16a6c3f7ceab037f8192917aaf56012a8db464578cdf417d19.scope - libcontainer container c4de791491d00c16a6c3f7ceab037f8192917aaf56012a8db464578cdf417d19.
Jan 15 12:51:24.436536 containerd[1712]: time="2025-01-15T12:51:24.436394883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d969cffbf-jvsvr,Uid:feca1961-1a00-4ffc-a412-2de4c1ed1bdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4de791491d00c16a6c3f7ceab037f8192917aaf56012a8db464578cdf417d19\""
Jan 15 12:51:24.438869 containerd[1712]: time="2025-01-15T12:51:24.438545319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 15 12:51:24.442879 containerd[1712]: time="2025-01-15T12:51:24.442647591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 15 12:51:24.442879 containerd[1712]: time="2025-01-15T12:51:24.442711631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 15 12:51:24.442879 containerd[1712]: time="2025-01-15T12:51:24.442726951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 15 12:51:24.443609 containerd[1712]: time="2025-01-15T12:51:24.443452630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[the kubelet FlexVolume init-error messages repeat, interleaved, with new timestamps from 12:51:24.446 through 12:51:24.461]
Jan 15 12:51:24.467118 systemd[1]: Started cri-containerd-bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711.scope - libcontainer container bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711.
[the kubelet FlexVolume init-error messages repeat once more at 12:51:24.470]
Jan 15 12:51:24.491311 containerd[1712]: time="2025-01-15T12:51:24.491269463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9txkr,Uid:f39ecac4-4d02-4b5e-904d-e0388b990a66,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\""
Jan 15 12:51:25.558317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116666709.mount: Deactivated successfully.
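The repeated kubelet errors above come from FlexVolume plugin probing: kubelet executes each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and parses the driver's stdout as a JSON status object. Here the nodeagent~uds/uds binary is missing, stdout is empty, and driver-call.go reports "unexpected end of JSON input". For illustration only (this sketch is not the actual uds binary from this log; the exact capability set is an assumption based on the FlexVolume driver convention), a minimal driver that would satisfy the probe could look like this in Go:

// Hypothetical minimal FlexVolume driver sketch, not from this log.
// kubelet invokes the driver as "<driver> init" and expects a JSON
// status object on stdout; empty stdout is what produces the
// "unexpected end of JSON input" errors seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success; advertising attach=false (an assumption here)
		// tells kubelet to skip the attach/detach call path.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any unimplemented call: report it as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}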
Jan 15 12:51:26.051999 containerd[1712]: time="2025-01-15T12:51:26.051250803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:26.054490 containerd[1712]: time="2025-01-15T12:51:26.054435518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 15 12:51:26.059531 containerd[1712]: time="2025-01-15T12:51:26.059469471Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:26.064229 containerd[1712]: time="2025-01-15T12:51:26.064156625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:26.065380 containerd[1712]: time="2025-01-15T12:51:26.064963024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.626382145s"
Jan 15 12:51:26.065380 containerd[1712]: time="2025-01-15T12:51:26.065000104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 15 12:51:26.066089 containerd[1712]: time="2025-01-15T12:51:26.066066462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 15 12:51:26.068330 kubelet[3164]: E0115 12:51:26.068241 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc"
Jan 15 12:51:26.087742 containerd[1712]: time="2025-01-15T12:51:26.087707352Z" level=info msg="CreateContainer within sandbox \"c4de791491d00c16a6c3f7ceab037f8192917aaf56012a8db464578cdf417d19\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 15 12:51:26.137429 containerd[1712]: time="2025-01-15T12:51:26.137332844Z" level=info msg="CreateContainer within sandbox \"c4de791491d00c16a6c3f7ceab037f8192917aaf56012a8db464578cdf417d19\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a6596ba9560760cdcfc919bfc07ce4ba8e13a2990d7082bb5eefc66e2bbe0345\""
Jan 15 12:51:26.141291 containerd[1712]: time="2025-01-15T12:51:26.138018643Z" level=info msg="StartContainer for \"a6596ba9560760cdcfc919bfc07ce4ba8e13a2990d7082bb5eefc66e2bbe0345\""
Jan 15 12:51:26.166133 systemd[1]: Started cri-containerd-a6596ba9560760cdcfc919bfc07ce4ba8e13a2990d7082bb5eefc66e2bbe0345.scope - libcontainer container a6596ba9560760cdcfc919bfc07ce4ba8e13a2990d7082bb5eefc66e2bbe0345.
Jan 15 12:51:26.206911 containerd[1712]: time="2025-01-15T12:51:26.206847827Z" level=info msg="StartContainer for \"a6596ba9560760cdcfc919bfc07ce4ba8e13a2990d7082bb5eefc66e2bbe0345\" returns successfully"
Jan 15 12:51:27.212500 containerd[1712]: time="2025-01-15T12:51:27.212444076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:27.215364 containerd[1712]: time="2025-01-15T12:51:27.215313512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 15 12:51:27.218976 containerd[1712]: time="2025-01-15T12:51:27.218750827Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:27.225648 containerd[1712]: time="2025-01-15T12:51:27.225574458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:27.226474 containerd[1712]: time="2025-01-15T12:51:27.226302617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.159914475s"
Jan 15 12:51:27.226474 containerd[1712]: time="2025-01-15T12:51:27.226340816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 15 12:51:27.230146 containerd[1712]: time="2025-01-15T12:51:27.230027331Z" level=info msg="CreateContainer within sandbox \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 15 12:51:27.261596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927563590.mount: Deactivated successfully.
Jan 15 12:51:27.267661 kubelet[3164]: E0115 12:51:27.267618 3164 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 12:51:27.268311 kubelet[3164]: W0115 12:51:27.268234 3164 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 12:51:27.268714 kubelet[3164]: E0115 12:51:27.268293 3164 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the kubelet FlexVolume init-error messages repeat, interleaved, with new timestamps through 12:51:27.275]
Jan 15 12:51:27.276185 containerd[1712]: time="2025-01-15T12:51:27.275823828Z" level=info msg="CreateContainer within sandbox \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e\""
Jan 15 12:51:27.277122 containerd[1712]: time="2025-01-15T12:51:27.277060906Z" level=info msg="StartContainer for \"a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e\""
Jan 15 12:51:27.314578 systemd[1]: Started cri-containerd-a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e.scope - libcontainer container a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e.
Jan 15 12:51:27.350878 containerd[1712]: time="2025-01-15T12:51:27.350829044Z" level=info msg="StartContainer for \"a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e\" returns successfully"
Jan 15 12:51:27.361056 systemd[1]: cri-containerd-a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e.scope: Deactivated successfully.
Jan 15 12:51:27.386599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e-rootfs.mount: Deactivated successfully.
Jan 15 12:51:28.071972 kubelet[3164]: E0115 12:51:28.070848 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc"
Jan 15 12:51:28.177901 kubelet[3164]: I0115 12:51:28.177173 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 15 12:51:28.193769 kubelet[3164]: I0115 12:51:28.193101 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d969cffbf-jvsvr" podStartSLOduration=3.565260017 podStartE2EDuration="5.193083599s" podCreationTimestamp="2025-01-15 12:51:23 +0000 UTC" firstStartedPulling="2025-01-15 12:51:24.43807548 +0000 UTC m=+14.455768709" lastFinishedPulling="2025-01-15 12:51:26.065899102 +0000 UTC m=+16.083592291" observedRunningTime="2025-01-15 12:51:27.190159907 +0000 UTC m=+17.207853176" watchObservedRunningTime="2025-01-15 12:51:28.193083599 +0000 UTC m=+18.210776788"
Jan 15 12:51:28.271016 containerd[1712]: time="2025-01-15T12:51:28.270952291Z" level=info msg="shim disconnected" id=a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e namespace=k8s.io
Jan 15 12:51:28.272267 containerd[1712]: time="2025-01-15T12:51:28.271350650Z" level=warning msg="cleaning up after shim disconnected" id=a216e1b13a8892d069d8672dbbfd4822a39cff2f66f43849cce4d3e533045f3e namespace=k8s.io
Jan 15 12:51:28.272267 containerd[1712]: time="2025-01-15T12:51:28.271371010Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 12:51:28.283341 containerd[1712]: time="2025-01-15T12:51:28.283284394Z" level=warning msg="cleanup warnings time=\"2025-01-15T12:51:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 15 12:51:29.186114 containerd[1712]: time="2025-01-15T12:51:29.186067578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 15 12:51:30.068689 kubelet[3164]: E0115 12:51:30.068342 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc"
Jan 15 12:51:31.416447 kubelet[3164]: I0115 12:51:31.416413 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 15 12:51:31.976600 containerd[1712]: time="2025-01-15T12:51:31.976562246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:31.979548 containerd[1712]: time="2025-01-15T12:51:31.979405563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 15 12:51:31.983594 containerd[1712]: time="2025-01-15T12:51:31.983523957Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:31.988657 containerd[1712]: time="2025-01-15T12:51:31.988503830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 12:51:31.989971 containerd[1712]: time="2025-01-15T12:51:31.989918548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.80381317s"
Jan 15 12:51:31.990036 containerd[1712]: time="2025-01-15T12:51:31.989976108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 15 12:51:31.992727 containerd[1712]: time="2025-01-15T12:51:31.992690344Z" level=info msg="CreateContainer within sandbox \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 15 12:51:32.034442 containerd[1712]: time="2025-01-15T12:51:32.034321686Z" level=info msg="CreateContainer within sandbox \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9\""
Jan 15 12:51:32.034956 containerd[1712]: time="2025-01-15T12:51:32.034863645Z" level=info msg="StartContainer for \"67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9\""
Jan 15 12:51:32.066162 systemd[1]: Started cri-containerd-67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9.scope - libcontainer container 67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9.
Jan 15 12:51:32.068218 kubelet[3164]: E0115 12:51:32.068090 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc"
Jan 15 12:51:32.112308 containerd[1712]: time="2025-01-15T12:51:32.112254937Z" level=info msg="StartContainer for \"67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9\" returns successfully"
Jan 15 12:51:33.675730 containerd[1712]: time="2025-01-15T12:51:33.675671317Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 15 12:51:33.680913 systemd[1]: cri-containerd-67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9.scope: Deactivated successfully.
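The containerd error just above shows the runtime watching /etc/cni/net.d and finding no network config: the install-cni container's job is to drop a CNI conflist there before the plugin can initialize, and until it does, kubelet keeps reporting "cni plugin not initialized". As a purely illustrative sketch (the file name, cniVersion, and field values are assumptions for illustration, not taken from this log), writing such a config could look like:

// Illustrative sketch only: writes a minimal Calico-style CNI conflist
// so the runtime finds a network config in /etc/cni/net.d. Requires
// root; all values here are assumptions, not from this log.
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

func main() {
	conf := map[string]any{
		"name":       "k8s-pod-network",
		"cniVersion": "0.3.1",
		"plugins": []map[string]any{
			{
				"type":           "calico",
				"datastore_type": "kubernetes",
				"ipam":           map[string]any{"type": "calico-ipam"},
				"kubernetes": map[string]any{
					// Matches the kubeconfig path seen in the fs change
					// event logged above.
					"kubeconfig": "/etc/cni/net.d/calico-kubeconfig",
				},
			},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	path := filepath.Join("/etc/cni/net.d", "10-calico.conflist")
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}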
Jan 15 12:51:33.699822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9-rootfs.mount: Deactivated successfully.
Jan 15 12:51:33.741841 kubelet[3164]: I0115 12:51:33.741627 3164 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 15 12:51:33.782651 systemd[1]: Created slice kubepods-burstable-pod255b2dd6_7cc6_46c7_9328_d0c8736c3451.slice - libcontainer container kubepods-burstable-pod255b2dd6_7cc6_46c7_9328_d0c8736c3451.slice.
Jan 15 12:51:33.796668 systemd[1]: Created slice kubepods-besteffort-pod9db81a82_d2ca_4841_8eb8_517bd155d320.slice - libcontainer container kubepods-besteffort-pod9db81a82_d2ca_4841_8eb8_517bd155d320.slice.
Jan 15 12:51:33.805746 systemd[1]: Created slice kubepods-burstable-pod29ce35b0_776c_47e4_8693_df783bd5b593.slice - libcontainer container kubepods-burstable-pod29ce35b0_776c_47e4_8693_df783bd5b593.slice.
Jan 15 12:51:33.814784 systemd[1]: Created slice kubepods-besteffort-podb559d165_5f82_41f6_b8f0_2a1034da5c7c.slice - libcontainer container kubepods-besteffort-podb559d165_5f82_41f6_b8f0_2a1034da5c7c.slice.
Jan 15 12:51:33.819712 systemd[1]: Created slice kubepods-besteffort-poda02275ac_e608_4dc2_9a86_dab48a463c3a.slice - libcontainer container kubepods-besteffort-poda02275ac_e608_4dc2_9a86_dab48a463c3a.slice.
Jan 15 12:51:33.919996 kubelet[3164]: I0115 12:51:33.919819 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68hb\" (UniqueName: \"kubernetes.io/projected/a02275ac-e608-4dc2-9a86-dab48a463c3a-kube-api-access-n68hb\") pod \"calico-apiserver-5fff95f6db-dsg6r\" (UID: \"a02275ac-e608-4dc2-9a86-dab48a463c3a\") " pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r"
Jan 15 12:51:33.919996 kubelet[3164]: I0115 12:51:33.919867 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b559d165-5f82-41f6-b8f0-2a1034da5c7c-calico-apiserver-certs\") pod \"calico-apiserver-5fff95f6db-mwpbr\" (UID: \"b559d165-5f82-41f6-b8f0-2a1034da5c7c\") " pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr"
Jan 15 12:51:33.919996 kubelet[3164]: I0115 12:51:33.919887 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnzz\" (UniqueName: \"kubernetes.io/projected/b559d165-5f82-41f6-b8f0-2a1034da5c7c-kube-api-access-wmnzz\") pod \"calico-apiserver-5fff95f6db-mwpbr\" (UID: \"b559d165-5f82-41f6-b8f0-2a1034da5c7c\") " pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr"
Jan 15 12:51:33.919996 kubelet[3164]: I0115 12:51:33.919903 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/255b2dd6-7cc6-46c7-9328-d0c8736c3451-config-volume\") pod \"coredns-6f6b679f8f-8mfwq\" (UID: \"255b2dd6-7cc6-46c7-9328-d0c8736c3451\") " pod="kube-system/coredns-6f6b679f8f-8mfwq"
Jan 15 12:51:33.919996 kubelet[3164]: I0115 12:51:33.919922 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xpg\" (UniqueName: \"kubernetes.io/projected/255b2dd6-7cc6-46c7-9328-d0c8736c3451-kube-api-access-n7xpg\") pod \"coredns-6f6b679f8f-8mfwq\" (UID: \"255b2dd6-7cc6-46c7-9328-d0c8736c3451\") " pod="kube-system/coredns-6f6b679f8f-8mfwq"
Jan 15 12:51:33.920252 kubelet[3164]: I0115 12:51:33.919954 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ce35b0-776c-47e4-8693-df783bd5b593-config-volume\") pod \"coredns-6f6b679f8f-mgzgv\" (UID: \"29ce35b0-776c-47e4-8693-df783bd5b593\") " pod="kube-system/coredns-6f6b679f8f-mgzgv"
Jan 15 12:51:33.920252 kubelet[3164]: I0115 12:51:33.919975 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47hh\" (UniqueName: \"kubernetes.io/projected/29ce35b0-776c-47e4-8693-df783bd5b593-kube-api-access-w47hh\") pod \"coredns-6f6b679f8f-mgzgv\" (UID: \"29ce35b0-776c-47e4-8693-df783bd5b593\") " pod="kube-system/coredns-6f6b679f8f-mgzgv"
Jan 15 12:51:33.920252 kubelet[3164]: I0115 12:51:33.919993 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9db81a82-d2ca-4841-8eb8-517bd155d320-tigera-ca-bundle\") pod \"calico-kube-controllers-67656bbdb8-tghv9\" (UID: \"9db81a82-d2ca-4841-8eb8-517bd155d320\") " pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9"
Jan 15 12:51:33.920252 kubelet[3164]: I0115 12:51:33.920011 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a02275ac-e608-4dc2-9a86-dab48a463c3a-calico-apiserver-certs\") pod \"calico-apiserver-5fff95f6db-dsg6r\" (UID: \"a02275ac-e608-4dc2-9a86-dab48a463c3a\") " pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r"
Jan 15 12:51:33.920252 kubelet[3164]: I0115 12:51:33.920026 3164 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddbg4\" (UniqueName: \"kubernetes.io/projected/9db81a82-d2ca-4841-8eb8-517bd155d320-kube-api-access-ddbg4\") pod \"calico-kube-controllers-67656bbdb8-tghv9\" (UID: \"9db81a82-d2ca-4841-8eb8-517bd155d320\") " pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9"
Jan 15 12:51:34.078973 systemd[1]: Created slice kubepods-besteffort-pod78c83d5e_cbb0_4ecd_a9c6_5aa6606ecbcc.slice - libcontainer container kubepods-besteffort-pod78c83d5e_cbb0_4ecd_a9c6_5aa6606ecbcc.slice.
Jan 15 12:51:34.081495 containerd[1712]: time="2025-01-15T12:51:34.081424711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwl5w,Uid:78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc,Namespace:calico-system,Attempt:0,}"
Jan 15 12:51:34.162963 containerd[1712]: time="2025-01-15T12:51:34.162659998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8mfwq,Uid:255b2dd6-7cc6-46c7-9328-d0c8736c3451,Namespace:kube-system,Attempt:0,}"
Jan 15 12:51:34.163658 containerd[1712]: time="2025-01-15T12:51:34.163332237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67656bbdb8-tghv9,Uid:9db81a82-d2ca-4841-8eb8-517bd155d320,Namespace:calico-system,Attempt:0,}"
Jan 15 12:51:34.163658 containerd[1712]: time="2025-01-15T12:51:34.163359197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mgzgv,Uid:29ce35b0-776c-47e4-8693-df783bd5b593,Namespace:kube-system,Attempt:0,}"
Jan 15 12:51:34.163658 containerd[1712]: time="2025-01-15T12:51:34.163511836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-mwpbr,Uid:b559d165-5f82-41f6-b8f0-2a1034da5c7c,Namespace:calico-apiserver,Attempt:0,}"
Jan 15 12:51:34.163968 containerd[1712]: time="2025-01-15T12:51:34.163921476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-dsg6r,Uid:a02275ac-e608-4dc2-9a86-dab48a463c3a,Namespace:calico-apiserver,Attempt:0,}"
Jan 15 12:51:36.623797 containerd[1712]: time="2025-01-15T12:51:36.623717805Z" level=info msg="shim disconnected" id=67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9 namespace=k8s.io
Jan 15 12:51:36.623797 containerd[1712]: time="2025-01-15T12:51:36.623789325Z" level=warning msg="cleaning up after shim disconnected" id=67801e29eeb4d9eefd6c5656a5133d9847e28d58ffe851c40fc58ae373a437a9 namespace=k8s.io
Jan 15 12:51:36.623797 containerd[1712]: time="2025-01-15T12:51:36.623799765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 15 12:51:37.155579 containerd[1712]: time="2025-01-15T12:51:37.155533125Z" level=error msg="Failed to destroy network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.156217 containerd[1712]: time="2025-01-15T12:51:37.156059004Z" level=error msg="encountered an error cleaning up failed sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.156217 containerd[1712]: time="2025-01-15T12:51:37.156112324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8mfwq,Uid:255b2dd6-7cc6-46c7-9328-d0c8736c3451,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.156398 kubelet[3164]: E0115 12:51:37.156341 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.156677 kubelet[3164]: E0115 12:51:37.156421 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8mfwq"
Jan 15 12:51:37.156677 kubelet[3164]: E0115 12:51:37.156439 3164 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8mfwq"
Jan 15 12:51:37.156677 kubelet[3164]: E0115 12:51:37.156482 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8mfwq_kube-system(255b2dd6-7cc6-46c7-9328-d0c8736c3451)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8mfwq_kube-system(255b2dd6-7cc6-46c7-9328-d0c8736c3451)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8mfwq" podUID="255b2dd6-7cc6-46c7-9328-d0c8736c3451"
Jan 15 12:51:37.206101 containerd[1712]: time="2025-01-15T12:51:37.205919053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 15 12:51:37.207704 kubelet[3164]: I0115 12:51:37.207613 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d"
Jan 15 12:51:37.208637 containerd[1712]: time="2025-01-15T12:51:37.208403250Z" level=info msg="StopPodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\""
Jan 15 12:51:37.208637 containerd[1712]: time="2025-01-15T12:51:37.208547809Z" level=info msg="Ensure that sandbox 35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d in task-service has been cleanup successfully"
Jan 15 12:51:37.242430 containerd[1712]: time="2025-01-15T12:51:37.242296121Z" level=error msg="StopPodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" failed" error="failed to destroy network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.242751 kubelet[3164]: E0115 12:51:37.242714 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d"
Jan 15 12:51:37.242822 kubelet[3164]: E0115 12:51:37.242772 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d"}
Jan 15 12:51:37.242850 kubelet[3164]: E0115 12:51:37.242827 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"255b2dd6-7cc6-46c7-9328-d0c8736c3451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 15 12:51:37.242904 kubelet[3164]: E0115 12:51:37.242850 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"255b2dd6-7cc6-46c7-9328-d0c8736c3451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8mfwq" podUID="255b2dd6-7cc6-46c7-9328-d0c8736c3451"
Jan 15 12:51:37.269751 containerd[1712]: time="2025-01-15T12:51:37.269647802Z" level=error msg="Failed to destroy network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.270486 containerd[1712]: time="2025-01-15T12:51:37.270132321Z" level=error msg="encountered an error cleaning up failed sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.270486 containerd[1712]: time="2025-01-15T12:51:37.270193121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mgzgv,Uid:29ce35b0-776c-47e4-8693-df783bd5b593,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 15 12:51:37.270602 kubelet[3164]: E0115 12:51:37.270394 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has
mounted /var/lib/calico/" Jan 15 12:51:37.270602 kubelet[3164]: E0115 12:51:37.270447 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mgzgv" Jan 15 12:51:37.270602 kubelet[3164]: E0115 12:51:37.270470 3164 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-mgzgv" Jan 15 12:51:37.270688 kubelet[3164]: E0115 12:51:37.270513 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-mgzgv_kube-system(29ce35b0-776c-47e4-8693-df783bd5b593)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-mgzgv_kube-system(29ce35b0-776c-47e4-8693-df783bd5b593)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mgzgv" podUID="29ce35b0-776c-47e4-8693-df783bd5b593" Jan 15 12:51:37.398656 containerd[1712]: time="2025-01-15T12:51:37.398600138Z" level=error msg="Failed to destroy network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.398975 containerd[1712]: time="2025-01-15T12:51:37.398945217Z" level=error msg="encountered an error cleaning up failed sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.399055 containerd[1712]: time="2025-01-15T12:51:37.398997777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwl5w,Uid:78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.399604 kubelet[3164]: E0115 12:51:37.399210 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.399604 kubelet[3164]: E0115 12:51:37.399268 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:37.399604 kubelet[3164]: E0115 12:51:37.399287 3164 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwl5w" Jan 15 12:51:37.399702 kubelet[3164]: E0115 12:51:37.399328 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rwl5w_calico-system(78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rwl5w_calico-system(78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc" Jan 15 12:51:37.444783 containerd[1712]: time="2025-01-15T12:51:37.444650312Z" level=error msg="Failed to destroy network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.445788 containerd[1712]: time="2025-01-15T12:51:37.445209031Z" level=error msg="encountered an error cleaning up failed sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.445788 containerd[1712]: time="2025-01-15T12:51:37.445258511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67656bbdb8-tghv9,Uid:9db81a82-d2ca-4841-8eb8-517bd155d320,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.446555 kubelet[3164]: E0115 12:51:37.445483 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.446555 kubelet[3164]: E0115 12:51:37.445550 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9" Jan 15 12:51:37.446555 kubelet[3164]: E0115 12:51:37.445583 3164 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9" Jan 15 12:51:37.446650 kubelet[3164]: E0115 12:51:37.445627 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67656bbdb8-tghv9_calico-system(9db81a82-d2ca-4841-8eb8-517bd155d320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67656bbdb8-tghv9_calico-system(9db81a82-d2ca-4841-8eb8-517bd155d320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9" podUID="9db81a82-d2ca-4841-8eb8-517bd155d320" Jan 15 12:51:37.558847 containerd[1712]: time="2025-01-15T12:51:37.558790109Z" level=error msg="Failed to destroy network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.559172 containerd[1712]: time="2025-01-15T12:51:37.559141548Z" level=error msg="encountered an error cleaning up failed sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.559226 containerd[1712]: time="2025-01-15T12:51:37.559202268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-dsg6r,Uid:a02275ac-e608-4dc2-9a86-dab48a463c3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.559788 kubelet[3164]: E0115 12:51:37.559440 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.559788 kubelet[3164]: E0115 12:51:37.559506 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r" Jan 15 12:51:37.559788 kubelet[3164]: E0115 12:51:37.559524 3164 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r" Jan 15 12:51:37.559911 kubelet[3164]: E0115 12:51:37.559569 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fff95f6db-dsg6r_calico-apiserver(a02275ac-e608-4dc2-9a86-dab48a463c3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fff95f6db-dsg6r_calico-apiserver(a02275ac-e608-4dc2-9a86-dab48a463c3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r" podUID="a02275ac-e608-4dc2-9a86-dab48a463c3a" Jan 15 12:51:37.656591 containerd[1712]: time="2025-01-15T12:51:37.656540129Z" level=error msg="Failed to destroy network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.657426 containerd[1712]: time="2025-01-15T12:51:37.657233008Z" level=error msg="encountered an error cleaning up failed sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.657426 containerd[1712]: time="2025-01-15T12:51:37.657307168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-mwpbr,Uid:b559d165-5f82-41f6-b8f0-2a1034da5c7c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.657610 
kubelet[3164]: E0115 12:51:37.657563 3164 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:37.657698 kubelet[3164]: E0115 12:51:37.657628 3164 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr" Jan 15 12:51:37.657698 kubelet[3164]: E0115 12:51:37.657648 3164 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr" Jan 15 12:51:37.657786 kubelet[3164]: E0115 12:51:37.657692 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fff95f6db-mwpbr_calico-apiserver(b559d165-5f82-41f6-b8f0-2a1034da5c7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fff95f6db-mwpbr_calico-apiserver(b559d165-5f82-41f6-b8f0-2a1034da5c7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr" podUID="b559d165-5f82-41f6-b8f0-2a1034da5c7c" Jan 15 12:51:37.762976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4-shm.mount: Deactivated successfully. Jan 15 12:51:37.763071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245-shm.mount: Deactivated successfully. Jan 15 12:51:37.763121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d-shm.mount: Deactivated successfully. 
Jan 15 12:51:38.211748 kubelet[3164]: I0115 12:51:38.210015 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:51:38.212175 containerd[1712]: time="2025-01-15T12:51:38.212002415Z" level=info msg="StopPodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\"" Jan 15 12:51:38.212175 containerd[1712]: time="2025-01-15T12:51:38.212160135Z" level=info msg="Ensure that sandbox 863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245 in task-service has been cleanup successfully" Jan 15 12:51:38.214176 kubelet[3164]: I0115 12:51:38.213475 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:51:38.214273 containerd[1712]: time="2025-01-15T12:51:38.213920533Z" level=info msg="StopPodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\"" Jan 15 12:51:38.214273 containerd[1712]: time="2025-01-15T12:51:38.214082132Z" level=info msg="Ensure that sandbox 6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4 in task-service has been cleanup successfully" Jan 15 12:51:38.217931 kubelet[3164]: I0115 12:51:38.217565 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:51:38.218652 containerd[1712]: time="2025-01-15T12:51:38.218311046Z" level=info msg="StopPodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\"" Jan 15 12:51:38.218652 containerd[1712]: time="2025-01-15T12:51:38.218456206Z" level=info msg="Ensure that sandbox 59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105 in task-service has been cleanup successfully" Jan 15 12:51:38.221999 kubelet[3164]: I0115 12:51:38.221075 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:51:38.222303 containerd[1712]: time="2025-01-15T12:51:38.222270481Z" level=info msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" Jan 15 12:51:38.222929 containerd[1712]: time="2025-01-15T12:51:38.222896280Z" level=info msg="Ensure that sandbox 14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798 in task-service has been cleanup successfully" Jan 15 12:51:38.225867 kubelet[3164]: I0115 12:51:38.225810 3164 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:51:38.226406 containerd[1712]: time="2025-01-15T12:51:38.226307795Z" level=info msg="StopPodSandbox for \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\"" Jan 15 12:51:38.226482 containerd[1712]: time="2025-01-15T12:51:38.226449515Z" level=info msg="Ensure that sandbox b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7 in task-service has been cleanup successfully" Jan 15 12:51:38.278734 containerd[1712]: time="2025-01-15T12:51:38.278443200Z" level=error msg="StopPodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" failed" error="failed to destroy network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:38.278863 kubelet[3164]: E0115 12:51:38.278648 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:51:38.278863 kubelet[3164]: E0115 12:51:38.278689 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245"} Jan 15 12:51:38.278863 kubelet[3164]: E0115 12:51:38.278720 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:38.278863 kubelet[3164]: E0115 12:51:38.278746 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwl5w" podUID="78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc" Jan 15 12:51:38.282835 containerd[1712]: time="2025-01-15T12:51:38.282788594Z" level=error msg="StopPodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" failed" error="failed to destroy network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:38.283222 kubelet[3164]: E0115 12:51:38.282969 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:51:38.283222 kubelet[3164]: E0115 12:51:38.283004 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105"} Jan 15 12:51:38.283222 kubelet[3164]: E0115 12:51:38.283035 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a02275ac-e608-4dc2-9a86-dab48a463c3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:38.283222 kubelet[3164]: E0115 12:51:38.283063 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a02275ac-e608-4dc2-9a86-dab48a463c3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r" podUID="a02275ac-e608-4dc2-9a86-dab48a463c3a" Jan 15 12:51:38.292362 containerd[1712]: time="2025-01-15T12:51:38.292316541Z" level=error msg="StopPodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" failed" error="failed to destroy network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:38.292562 kubelet[3164]: E0115 12:51:38.292534 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:51:38.292643 kubelet[3164]: E0115 12:51:38.292575 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4"} Jan 15 12:51:38.292643 kubelet[3164]: E0115 12:51:38.292604 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9db81a82-d2ca-4841-8eb8-517bd155d320\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:38.292643 kubelet[3164]: E0115 12:51:38.292623 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9db81a82-d2ca-4841-8eb8-517bd155d320\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9" podUID="9db81a82-d2ca-4841-8eb8-517bd155d320" Jan 15 12:51:38.295579 containerd[1712]: time="2025-01-15T12:51:38.295541416Z" level=error msg="StopPodSandbox for 
\"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" failed" error="failed to destroy network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:38.295861 kubelet[3164]: E0115 12:51:38.295823 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:51:38.295926 kubelet[3164]: E0115 12:51:38.295869 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7"} Jan 15 12:51:38.295926 kubelet[3164]: E0115 12:51:38.295897 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29ce35b0-776c-47e4-8693-df783bd5b593\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:38.295926 kubelet[3164]: E0115 12:51:38.295914 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29ce35b0-776c-47e4-8693-df783bd5b593\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-mgzgv" podUID="29ce35b0-776c-47e4-8693-df783bd5b593" Jan 15 12:51:38.296786 containerd[1712]: time="2025-01-15T12:51:38.296690734Z" level=error msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" failed" error="failed to destroy network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:38.296900 kubelet[3164]: E0115 12:51:38.296847 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:51:38.297004 kubelet[3164]: E0115 12:51:38.296905 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798"} Jan 15 12:51:38.297004 kubelet[3164]: E0115 12:51:38.296929 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b559d165-5f82-41f6-b8f0-2a1034da5c7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:38.297108 kubelet[3164]: E0115 12:51:38.296970 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b559d165-5f82-41f6-b8f0-2a1034da5c7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr" podUID="b559d165-5f82-41f6-b8f0-2a1034da5c7c" Jan 15 12:51:46.190329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899074669.mount: Deactivated successfully. Jan 15 12:51:49.771079 containerd[1712]: time="2025-01-15T12:51:49.771017493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:49.773803 containerd[1712]: time="2025-01-15T12:51:49.773675729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 15 12:51:49.776573 containerd[1712]: time="2025-01-15T12:51:49.776493365Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:49.780692 containerd[1712]: time="2025-01-15T12:51:49.780644039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:49.781376 containerd[1712]: time="2025-01-15T12:51:49.781214478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 12.575190305s" Jan 15 12:51:49.781376 containerd[1712]: time="2025-01-15T12:51:49.781249638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 15 12:51:49.789919 containerd[1712]: time="2025-01-15T12:51:49.789885586Z" level=info msg="CreateContainer within sandbox \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 12:51:49.834468 containerd[1712]: time="2025-01-15T12:51:49.834381121Z" level=info msg="CreateContainer within sandbox \"bc537d4dc306ee1c4159768911effd397af7bfd90e411fa5253a5afd06554711\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329\"" Jan 15 12:51:49.835115 containerd[1712]: time="2025-01-15T12:51:49.835062080Z" level=info msg="StartContainer for \"cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329\"" Jan 15 12:51:49.867211 systemd[1]: Started cri-containerd-cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329.scope - libcontainer container cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329. Jan 15 12:51:49.897716 containerd[1712]: time="2025-01-15T12:51:49.897573069Z" level=info msg="StartContainer for \"cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329\" returns successfully" Jan 15 12:51:50.070852 containerd[1712]: time="2025-01-15T12:51:50.068680939Z" level=info msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" Jan 15 12:51:50.100420 containerd[1712]: time="2025-01-15T12:51:50.100366533Z" level=error msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" failed" error="failed to destroy network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 12:51:50.100622 kubelet[3164]: E0115 12:51:50.100571 3164 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:51:50.100970 kubelet[3164]: E0115 12:51:50.100625 3164 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798"} Jan 15 12:51:50.100970 kubelet[3164]: E0115 12:51:50.100660 3164 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b559d165-5f82-41f6-b8f0-2a1034da5c7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 15 12:51:50.100970 kubelet[3164]: E0115 12:51:50.100680 3164 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b559d165-5f82-41f6-b8f0-2a1034da5c7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr" podUID="b559d165-5f82-41f6-b8f0-2a1034da5c7c" Jan 15 12:51:50.149889 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 15 12:51:50.150036 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 15 12:51:51.069067 containerd[1712]: time="2025-01-15T12:51:51.068996359Z" level=info msg="StopPodSandbox for \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\"" Jan 15 12:51:51.119497 kubelet[3164]: I0115 12:51:51.119137 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9txkr" podStartSLOduration=1.829689109 podStartE2EDuration="27.119118126s" podCreationTimestamp="2025-01-15 12:51:24 +0000 UTC" firstStartedPulling="2025-01-15 12:51:24.49268826 +0000 UTC m=+14.510381489" lastFinishedPulling="2025-01-15 12:51:49.782117317 +0000 UTC m=+39.799810506" observedRunningTime="2025-01-15 12:51:50.273124801 +0000 UTC m=+40.290818030" watchObservedRunningTime="2025-01-15 12:51:51.119118126 +0000 UTC m=+41.136811315" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.117 [INFO][4286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.117 [INFO][4286] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" iface="eth0" netns="/var/run/netns/cni-8bffea03-72b5-814f-c04d-2634ae6d8f24" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.118 [INFO][4286] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" iface="eth0" netns="/var/run/netns/cni-8bffea03-72b5-814f-c04d-2634ae6d8f24" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.119 [INFO][4286] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" iface="eth0" netns="/var/run/netns/cni-8bffea03-72b5-814f-c04d-2634ae6d8f24" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.119 [INFO][4286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.119 [INFO][4286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.140 [INFO][4292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.140 [INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.140 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.148 [WARNING][4292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.148 [INFO][4292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.158 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:51.162249 containerd[1712]: 2025-01-15 12:51:51.159 [INFO][4286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:51:51.163029 containerd[1712]: time="2025-01-15T12:51:51.162840302Z" level=info msg="TearDown network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" successfully" Jan 15 12:51:51.163029 containerd[1712]: time="2025-01-15T12:51:51.162874062Z" level=info msg="StopPodSandbox for \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" returns successfully" Jan 15 12:51:51.166293 containerd[1712]: time="2025-01-15T12:51:51.166189778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mgzgv,Uid:29ce35b0-776c-47e4-8693-df783bd5b593,Namespace:kube-system,Attempt:1,}" Jan 15 12:51:51.166601 systemd[1]: run-netns-cni\x2d8bffea03\x2d72b5\x2d814f\x2dc04d\x2d2634ae6d8f24.mount: Deactivated successfully. Jan 15 12:51:52.069884 containerd[1712]: time="2025-01-15T12:51:52.069838579Z" level=info msg="StopPodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\"" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.116 [INFO][4363] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.117 [INFO][4363] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" iface="eth0" netns="/var/run/netns/cni-b17375fa-cbc2-7a2c-1694-36ddab06183c" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.118 [INFO][4363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" iface="eth0" netns="/var/run/netns/cni-b17375fa-cbc2-7a2c-1694-36ddab06183c" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.118 [INFO][4363] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" iface="eth0" netns="/var/run/netns/cni-b17375fa-cbc2-7a2c-1694-36ddab06183c" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.118 [INFO][4363] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.118 [INFO][4363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.137 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.137 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.137 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.145 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.146 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.147 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:52.149897 containerd[1712]: 2025-01-15 12:51:52.148 [INFO][4363] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:51:52.152233 containerd[1712]: time="2025-01-15T12:51:52.151801740Z" level=info msg="TearDown network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" successfully" Jan 15 12:51:52.152233 containerd[1712]: time="2025-01-15T12:51:52.151847019Z" level=info msg="StopPodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" returns successfully" Jan 15 12:51:52.152468 systemd[1]: run-netns-cni\x2db17375fa\x2dcbc2\x2d7a2c\x2d1694\x2d36ddab06183c.mount: Deactivated successfully. 
Jan 15 12:51:52.153071 containerd[1712]: time="2025-01-15T12:51:52.152877178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8mfwq,Uid:255b2dd6-7cc6-46c7-9328-d0c8736c3451,Namespace:kube-system,Attempt:1,}" Jan 15 12:51:53.068915 containerd[1712]: time="2025-01-15T12:51:53.068863914Z" level=info msg="StopPodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\"" Jan 15 12:51:53.069137 containerd[1712]: time="2025-01-15T12:51:53.069076114Z" level=info msg="StopPodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\"" Jan 15 12:51:53.070950 containerd[1712]: time="2025-01-15T12:51:53.070746551Z" level=info msg="StopPodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\"" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.149 [INFO][4431] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.150 [INFO][4431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" iface="eth0" netns="/var/run/netns/cni-6d480553-113c-ff07-6f1e-18c73af6015f" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.150 [INFO][4431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" iface="eth0" netns="/var/run/netns/cni-6d480553-113c-ff07-6f1e-18c73af6015f" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.151 [INFO][4431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" iface="eth0" netns="/var/run/netns/cni-6d480553-113c-ff07-6f1e-18c73af6015f" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.151 [INFO][4431] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.151 [INFO][4431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.188 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.188 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.188 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.201 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.202 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.203 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:53.206631 containerd[1712]: 2025-01-15 12:51:53.205 [INFO][4431] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:51:53.207915 containerd[1712]: time="2025-01-15T12:51:53.206831350Z" level=info msg="TearDown network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" successfully" Jan 15 12:51:53.207915 containerd[1712]: time="2025-01-15T12:51:53.206862350Z" level=info msg="StopPodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" returns successfully" Jan 15 12:51:53.209160 systemd[1]: run-netns-cni\x2d6d480553\x2d113c\x2dff07\x2d6f1e\x2d18c73af6015f.mount: Deactivated successfully. Jan 15 12:51:53.211209 containerd[1712]: time="2025-01-15T12:51:53.210450065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67656bbdb8-tghv9,Uid:9db81a82-d2ca-4841-8eb8-517bd155d320,Namespace:calico-system,Attempt:1,}" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.158 [INFO][4427] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.158 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" iface="eth0" netns="/var/run/netns/cni-cfbba2a2-d67c-ce6a-a20e-7e64121583fa" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.158 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" iface="eth0" netns="/var/run/netns/cni-cfbba2a2-d67c-ce6a-a20e-7e64121583fa" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.159 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" iface="eth0" netns="/var/run/netns/cni-cfbba2a2-d67c-ce6a-a20e-7e64121583fa" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.159 [INFO][4427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.159 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.192 [INFO][4453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.192 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.203 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.215 [WARNING][4453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.215 [INFO][4453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.217 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:53.221274 containerd[1712]: 2025-01-15 12:51:53.220 [INFO][4427] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:51:53.224591 containerd[1712]: time="2025-01-15T12:51:53.221816768Z" level=info msg="TearDown network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" successfully" Jan 15 12:51:53.224591 containerd[1712]: time="2025-01-15T12:51:53.222075088Z" level=info msg="StopPodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" returns successfully" Jan 15 12:51:53.226875 containerd[1712]: time="2025-01-15T12:51:53.226568321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-dsg6r,Uid:a02275ac-e608-4dc2-9a86-dab48a463c3a,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:51:53.227268 systemd[1]: run-netns-cni\x2dcfbba2a2\x2dd67c\x2dce6a\x2da20e\x2d7e64121583fa.mount: Deactivated successfully. Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.154 [INFO][4435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.155 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" iface="eth0" netns="/var/run/netns/cni-72c8010d-239a-81e2-a58e-da3cb0da6ef5" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.156 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" iface="eth0" netns="/var/run/netns/cni-72c8010d-239a-81e2-a58e-da3cb0da6ef5" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.156 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" iface="eth0" netns="/var/run/netns/cni-72c8010d-239a-81e2-a58e-da3cb0da6ef5" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.156 [INFO][4435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.157 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.197 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.197 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.217 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.230 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.230 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.231 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:51:53.234466 containerd[1712]: 2025-01-15 12:51:53.232 [INFO][4435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:51:53.234466 containerd[1712]: time="2025-01-15T12:51:53.234520950Z" level=info msg="TearDown network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" successfully" Jan 15 12:51:53.234466 containerd[1712]: time="2025-01-15T12:51:53.234545910Z" level=info msg="StopPodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" returns successfully" Jan 15 12:51:53.236465 systemd[1]: run-netns-cni\x2d72c8010d\x2d239a\x2d81e2\x2da58e\x2dda3cb0da6ef5.mount: Deactivated successfully. 
Jan 15 12:51:53.237857 containerd[1712]: time="2025-01-15T12:51:53.237492905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwl5w,Uid:78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc,Namespace:calico-system,Attempt:1,}" Jan 15 12:51:53.429046 kernel: bpftool[4521]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 15 12:51:53.969675 systemd-networkd[1338]: vxlan.calico: Link UP Jan 15 12:51:53.969686 systemd-networkd[1338]: vxlan.calico: Gained carrier Jan 15 12:51:54.067564 kubelet[3164]: I0115 12:51:54.066708 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:51:54.595163 systemd-networkd[1338]: calic6cb19c0bc1: Link UP Jan 15 12:51:54.597383 systemd-networkd[1338]: calic6cb19c0bc1: Gained carrier Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.184 [INFO][4562] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0 calico-apiserver-5fff95f6db- calico-apiserver a02275ac-e608-4dc2-9a86-dab48a463c3a 754 0 2025-01-15 12:51:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fff95f6db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-c63c213d7c calico-apiserver-5fff95f6db-dsg6r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic6cb19c0bc1 [] []}} ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.184 [INFO][4562] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.387 [INFO][4638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" HandleID="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.443 [INFO][4638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" HandleID="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-c63c213d7c", "pod":"calico-apiserver-5fff95f6db-dsg6r", "timestamp":"2025-01-15 12:51:54.387865407 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c63c213d7c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 
12:51:54.443 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.443 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.443 [INFO][4638] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c63c213d7c' Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.458 [INFO][4638] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.542 [INFO][4638] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.554 [INFO][4638] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.556 [INFO][4638] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.559 [INFO][4638] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.559 [INFO][4638] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.564 [INFO][4638] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.569 [INFO][4638] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.580 [INFO][4638] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.65/26] block=192.168.12.64/26 handle="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.580 [INFO][4638] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.65/26] handle="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.580 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
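[Editor's note] The assignment walk logged by [4638] is the Calico IPAM fast path: look up the node's block affinities, confirm the affine block 192.168.12.64/26, load it, and claim the next free address from it — which is why the first pod of this burst lands on 192.168.12.65. A back-of-the-envelope sketch of "first free address in a /26" (simplified; real Calico tracks allocations in the datastore, not an in-memory map):

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks a CIDR block and returns the first address past the
// network address that is not already allocated. For 192.168.12.64/26
// the first assignable host is 192.168.12.65.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.12.64/26")
	used := map[netip.Addr]bool{}
	for i := 0; i < 3; i++ {
		a, ok := firstFree(block, used)
		if !ok {
			break
		}
		used[a] = true
		fmt.Println(a) // .65, .66, .67 — the same order the log claims them
	}
}
```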
Jan 15 12:51:54.626300 containerd[1712]: 2025-01-15 12:51:54.580 [INFO][4638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.65/26] IPv6=[] ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" HandleID="k8s-pod-network.e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.628669 containerd[1712]: 2025-01-15 12:51:54.584 [INFO][4562] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"a02275ac-e608-4dc2-9a86-dab48a463c3a", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"", Pod:"calico-apiserver-5fff95f6db-dsg6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6cb19c0bc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.628669 containerd[1712]: 2025-01-15 12:51:54.584 [INFO][4562] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.65/32] ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.628669 containerd[1712]: 2025-01-15 12:51:54.585 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6cb19c0bc1 ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.628669 containerd[1712]: 2025-01-15 12:51:54.597 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.628669 containerd[1712]: 2025-01-15 12:51:54.598 [INFO][4562] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"a02275ac-e608-4dc2-9a86-dab48a463c3a", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d", Pod:"calico-apiserver-5fff95f6db-dsg6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6cb19c0bc1", MAC:"1e:e9:d8:4d:32:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.628669 containerd[1712]: 2025-01-15 12:51:54.624 [INFO][4562] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-dsg6r" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:51:54.675845 containerd[1712]: time="2025-01-15T12:51:54.675765982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:54.676595 containerd[1712]: time="2025-01-15T12:51:54.676053862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:54.676595 containerd[1712]: time="2025-01-15T12:51:54.676070142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:54.676595 containerd[1712]: time="2025-01-15T12:51:54.676183341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:54.717420 systemd-networkd[1338]: cali7b08768c81c: Link UP Jan 15 12:51:54.720263 systemd[1]: run-containerd-runc-k8s.io-e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d-runc.a2ooz5.mount: Deactivated successfully. 
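[Editor's note] With the address claimed, the endpoint record is completed with the pieces the dataplane needs: the active container ID, the host-side interface calic6cb19c0bc1, and the MAC 1e:e9:d8:4d:32:fd. That MAC has the locally-administered, unicast bit pattern typical of software-generated addresses. A sketch of how such a MAC can be synthesized — an illustration of the bit pattern, not Calico's exact code path:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomMAC returns a locally administered, unicast MAC address:
// bit 1 of the first octet set (locally administered), bit 0 cleared
// (unicast). 1e:e9:d8:4d:32:fd in the log has exactly this shape.
func randomMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
```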
Jan 15 12:51:54.722094 systemd-networkd[1338]: cali7b08768c81c: Gained carrier Jan 15 12:51:54.731598 systemd[1]: Started cri-containerd-e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d.scope - libcontainer container e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d. Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.317 [INFO][4584] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0 coredns-6f6b679f8f- kube-system 255b2dd6-7cc6-46c7-9328-d0c8736c3451 746 0 2025-01-15 12:51:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-c63c213d7c coredns-6f6b679f8f-8mfwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b08768c81c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.322 [INFO][4584] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.461 [INFO][4655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" HandleID="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.544 [INFO][4655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" HandleID="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039b2b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-c63c213d7c", "pod":"coredns-6f6b679f8f-8mfwq", "timestamp":"2025-01-15 12:51:54.461282539 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c63c213d7c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.545 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.580 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
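[Editor's note] The coredns endpoint found here carries three named ports (dns UDP 53, dns-tcp TCP 53, metrics TCP 9153); the Go struct dumps later in this section print the same values in hex (Port:0x35, Port:0x23c1). A tiny cross-check that the two notations agree:

```go
package main

import "fmt"

// endpointPort mirrors the shape of the port entries in the log's
// WorkloadEndpoint dumps (names and values taken from the log itself).
type endpointPort struct {
	Name  string
	Proto string
	Port  uint16
}

func main() {
	ports := []endpointPort{
		{"dns", "UDP", 0x35},       // 53
		{"dns-tcp", "TCP", 0x35},   // 53
		{"metrics", "TCP", 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%-8s %s/%d\n", p.Name, p.Proto, p.Port)
	}
}
```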
Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.581 [INFO][4655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c63c213d7c' Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.584 [INFO][4655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.640 [INFO][4655] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.654 [INFO][4655] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.660 [INFO][4655] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.664 [INFO][4655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.665 [INFO][4655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.668 [INFO][4655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4 Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.685 [INFO][4655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.695 [INFO][4655] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.66/26] block=192.168.12.64/26 handle="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.695 [INFO][4655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.66/26] handle="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.695 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
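[Editor's note] Five CmdAdd handlers ([4638], [4648], [4655], [4660], [4661]) race through this section, and their interleaved "About to acquire / Acquired / Released host-wide IPAM lock" lines show them serializing on a single per-host lock, so block updates never overlap on this node. The shape of that contention, reduced to a toy (goroutine IDs borrowed from the log for flavor):

```go
package main

import (
	"fmt"
	"sync"
)

// Five concurrent handlers contend for one host-wide lock, as the
// interleaved lock messages from [4638]/[4648]/[4655]/[4660]/[4661] show.
func main() {
	var hostWide sync.Mutex
	var wg sync.WaitGroup
	for _, id := range []int{4638, 4648, 4655, 4660, 4661} {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Printf("[%d] About to acquire host-wide IPAM lock.\n", id)
			hostWide.Lock()
			fmt.Printf("[%d] Acquired host-wide IPAM lock.\n", id)
			// ... assign one address from the affine block ...
			fmt.Printf("[%d] Released host-wide IPAM lock.\n", id)
			hostWide.Unlock()
		}(id)
	}
	wg.Wait()
}
```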
Jan 15 12:51:54.750070 containerd[1712]: 2025-01-15 12:51:54.695 [INFO][4655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.66/26] IPv6=[] ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" HandleID="k8s-pod-network.4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.750993 containerd[1712]: 2025-01-15 12:51:54.706 [INFO][4584] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"255b2dd6-7cc6-46c7-9328-d0c8736c3451", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"", Pod:"coredns-6f6b679f8f-8mfwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b08768c81c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.750993 containerd[1712]: 2025-01-15 12:51:54.706 [INFO][4584] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.66/32] ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.750993 containerd[1712]: 2025-01-15 12:51:54.706 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b08768c81c ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.750993 containerd[1712]: 2025-01-15 12:51:54.722 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" 
WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.750993 containerd[1712]: 2025-01-15 12:51:54.724 [INFO][4584] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"255b2dd6-7cc6-46c7-9328-d0c8736c3451", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4", Pod:"coredns-6f6b679f8f-8mfwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b08768c81c", MAC:"9e:17:24:e6:09:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.750993 containerd[1712]: 2025-01-15 12:51:54.747 [INFO][4584] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8mfwq" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:51:54.788929 containerd[1712]: time="2025-01-15T12:51:54.788105176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:54.788929 containerd[1712]: time="2025-01-15T12:51:54.788303776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:54.788929 containerd[1712]: time="2025-01-15T12:51:54.788356376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:54.788929 containerd[1712]: time="2025-01-15T12:51:54.788533736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:54.818694 systemd[1]: Started cri-containerd-4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4.scope - libcontainer container 4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4. Jan 15 12:51:54.826771 systemd-networkd[1338]: calic0f80cb67ae: Link UP Jan 15 12:51:54.826998 systemd-networkd[1338]: calic0f80cb67ae: Gained carrier Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.267 [INFO][4573] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0 coredns-6f6b679f8f- kube-system 29ce35b0-776c-47e4-8693-df783bd5b593 742 0 2025-01-15 12:51:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-c63c213d7c coredns-6f6b679f8f-mgzgv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0f80cb67ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.269 [INFO][4573] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.421 [INFO][4648] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" HandleID="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.639 [INFO][4648] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" HandleID="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000399850), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-c63c213d7c", "pod":"coredns-6f6b679f8f-mgzgv", "timestamp":"2025-01-15 12:51:54.420548079 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c63c213d7c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.639 [INFO][4648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.695 [INFO][4648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
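[Editor's note] Each "Started cri-containerd-&lt;id&gt;.scope - libcontainer container &lt;id&gt;" line is systemd creating a transient scope unit for the new container — how containerd's systemd cgroup driver names per-container cgroups. Deriving the unit name is just string assembly (a sketch, assuming only the naming visible in this log):

```go
package main

import "fmt"

// scopeUnit builds the transient systemd unit name for a container,
// matching the "Started cri-containerd-<id>.scope" lines above.
func scopeUnit(containerID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", containerID)
}

func main() {
	fmt.Println(scopeUnit("4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4"))
}
```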
Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.695 [INFO][4648] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c63c213d7c' Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.710 [INFO][4648] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.742 [INFO][4648] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.762 [INFO][4648] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.769 [INFO][4648] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.777 [INFO][4648] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.777 [INFO][4648] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.780 [INFO][4648] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2 Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.790 [INFO][4648] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.807 [INFO][4648] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.67/26] block=192.168.12.64/26 handle="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.807 [INFO][4648] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.67/26] handle="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.807 [INFO][4648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
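[Editor's note] "Writing block in order to claim IPs" suggests each claim is committed by rewriting the block document as a whole; in datastores like etcd such writes are typically optimistic, with a revision check and a retry on conflict (the host-wide lock above already keeps same-node writers from colliding). A toy compare-and-swap loop with that shape — an assumption about the mechanism, not code from Calico:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// claim stages a mutation and commits it only if the block's revision is
// unchanged since it was read; a conflicting writer forces a retry.
func claim(rev *atomic.Int64, mutate func()) {
	for {
		seen := rev.Load()
		mutate() // stage the claimed IPs locally
		if rev.CompareAndSwap(seen, seen+1) {
			return // write accepted at the revision we read
		}
		// someone else wrote the block first; reload and retry
	}
}

func main() {
	var rev atomic.Int64
	claim(&rev, func() {})
	fmt.Println("block written at revision", rev.Load())
}
```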
Jan 15 12:51:54.858737 containerd[1712]: 2025-01-15 12:51:54.808 [INFO][4648] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.67/26] IPv6=[] ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" HandleID="k8s-pod-network.48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.861193 containerd[1712]: 2025-01-15 12:51:54.818 [INFO][4573] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"29ce35b0-776c-47e4-8693-df783bd5b593", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"", Pod:"coredns-6f6b679f8f-mgzgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0f80cb67ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.861193 containerd[1712]: 2025-01-15 12:51:54.819 [INFO][4573] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.67/32] ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.861193 containerd[1712]: 2025-01-15 12:51:54.819 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0f80cb67ae ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.861193 containerd[1712]: 2025-01-15 12:51:54.825 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" 
WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.861193 containerd[1712]: 2025-01-15 12:51:54.825 [INFO][4573] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"29ce35b0-776c-47e4-8693-df783bd5b593", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2", Pod:"coredns-6f6b679f8f-mgzgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0f80cb67ae", MAC:"ba:c9:b9:11:3d:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.861193 containerd[1712]: 2025-01-15 12:51:54.845 [INFO][4573] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2" Namespace="kube-system" Pod="coredns-6f6b679f8f-mgzgv" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:51:54.895123 containerd[1712]: time="2025-01-15T12:51:54.894779819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-dsg6r,Uid:a02275ac-e608-4dc2-9a86-dab48a463c3a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d\"" Jan 15 12:51:54.900531 containerd[1712]: time="2025-01-15T12:51:54.900345531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 15 12:51:54.939450 systemd-networkd[1338]: cali190e602f27f: Link UP Jan 15 12:51:54.939588 systemd-networkd[1338]: cali190e602f27f: Gained carrier Jan 15 12:51:54.951626 containerd[1712]: time="2025-01-15T12:51:54.947056902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:54.951626 containerd[1712]: time="2025-01-15T12:51:54.947116502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:54.951626 containerd[1712]: time="2025-01-15T12:51:54.947130982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:54.951626 containerd[1712]: time="2025-01-15T12:51:54.947233021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:54.972073 containerd[1712]: time="2025-01-15T12:51:54.971916025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8mfwq,Uid:255b2dd6-7cc6-46c7-9328-d0c8736c3451,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4\"" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.347 [INFO][4622] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0 csi-node-driver- calico-system 78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc 753 0 2025-01-15 12:51:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-c63c213d7c csi-node-driver-rwl5w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali190e602f27f [] []}} ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.347 [INFO][4622] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.475 [INFO][4660] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" HandleID="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.640 [INFO][4660] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" HandleID="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000618550), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-c63c213d7c", "pod":"csi-node-driver-rwl5w", "timestamp":"2025-01-15 12:51:54.475621798 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c63c213d7c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.640 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.808 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.809 [INFO][4660] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c63c213d7c' Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.816 [INFO][4660] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.846 [INFO][4660] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.868 [INFO][4660] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.874 [INFO][4660] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.881 [INFO][4660] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.881 [INFO][4660] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.887 [INFO][4660] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59 Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.904 [INFO][4660] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.922 [INFO][4660] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.68/26] block=192.168.12.64/26 handle="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.922 [INFO][4660] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.68/26] handle="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.922 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
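[Editor's note] The host-side interface names throughout this section (calic6cb19c0bc1, cali7b08768c81c, calic0f80cb67ae, cali190e602f27f) are all exactly 15 characters, the Linux interface-name limit. Calico derives them deterministically by hashing the workload identity and keeping a short hex prefix; a sketch of that scheme (the exact hash input may differ between Calico versions):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName hashes "<namespace>.<pod-name>" and keeps "cali" plus 11 hex
// digits, fitting the 15-character Linux interface-name limit.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-system", "csi-node-driver-rwl5w"))
}
```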
Jan 15 12:51:54.996188 containerd[1712]: 2025-01-15 12:51:54.922 [INFO][4660] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.68/26] IPv6=[] ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" HandleID="k8s-pod-network.58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:54.998796 containerd[1712]: 2025-01-15 12:51:54.932 [INFO][4622] cni-plugin/k8s.go 386: Populated endpoint ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"", Pod:"csi-node-driver-rwl5w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190e602f27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.998796 containerd[1712]: 2025-01-15 12:51:54.934 [INFO][4622] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.68/32] ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:54.998796 containerd[1712]: 2025-01-15 12:51:54.934 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali190e602f27f ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:54.998796 containerd[1712]: 2025-01-15 12:51:54.943 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:54.998796 containerd[1712]: 2025-01-15 12:51:54.952 [INFO][4622] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59", Pod:"csi-node-driver-rwl5w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190e602f27f", MAC:"b6:a5:37:8d:68:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:54.998796 containerd[1712]: 2025-01-15 12:51:54.978 [INFO][4622] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59" Namespace="calico-system" Pod="csi-node-driver-rwl5w" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:51:55.000948 systemd[1]: Started cri-containerd-48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2.scope - libcontainer container 48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2. Jan 15 12:51:55.011389 containerd[1712]: time="2025-01-15T12:51:55.011338287Z" level=info msg="CreateContainer within sandbox \"4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:51:55.041712 systemd-networkd[1338]: cali6a073fbf3d9: Link UP Jan 15 12:51:55.043073 systemd-networkd[1338]: cali6a073fbf3d9: Gained carrier Jan 15 12:51:55.069549 containerd[1712]: time="2025-01-15T12:51:55.068998002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:55.069549 containerd[1712]: time="2025-01-15T12:51:55.069054522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:55.069549 containerd[1712]: time="2025-01-15T12:51:55.069066162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:55.069549 containerd[1712]: time="2025-01-15T12:51:55.069150321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:55.076791 containerd[1712]: time="2025-01-15T12:51:55.076737790Z" level=info msg="CreateContainer within sandbox \"4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f3bbcf55aed3adc9bc36765845df4a87b0d8740b0653ed1c0757e360d021cf1\"" Jan 15 12:51:55.079599 containerd[1712]: time="2025-01-15T12:51:55.079288146Z" level=info msg="StartContainer for \"5f3bbcf55aed3adc9bc36765845df4a87b0d8740b0653ed1c0757e360d021cf1\"" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.338 [INFO][4595] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0 calico-kube-controllers-67656bbdb8- calico-system 9db81a82-d2ca-4841-8eb8-517bd155d320 752 0 2025-01-15 12:51:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67656bbdb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-c63c213d7c calico-kube-controllers-67656bbdb8-tghv9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a073fbf3d9 [] []}} ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.340 [INFO][4595] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.488 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" HandleID="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.639 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" HandleID="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cb10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-c63c213d7c", "pod":"calico-kube-controllers-67656bbdb8-tghv9", "timestamp":"2025-01-15 12:51:54.488516938 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c63c213d7c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.640 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.923 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.924 [INFO][4661] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c63c213d7c' Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.929 [INFO][4661] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.961 [INFO][4661] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.981 [INFO][4661] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.985 [INFO][4661] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.988 [INFO][4661] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.989 [INFO][4661] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:54.992 [INFO][4661] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2 Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:55.000 [INFO][4661] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:55.020 [INFO][4661] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.69/26] block=192.168.12.64/26 handle="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:55.021 [INFO][4661] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.69/26] handle="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:55.021 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
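[Annotation] The [4661] ipam/ipam.go lines above walk Calico's host-affine IPAM assignment step by step: take the host-wide lock, look up the host's block affinities, load the affine block (192.168.12.64/26), reserve an address from it, create the k8s-pod-network.<containerID> handle, and write the block back before releasing the lock. A minimal sketch of that control flow — helper names (lookupAffineBlocks, loadBlock, writeBlock) are hypothetical, not Calico's actual code; only the order of operations is taken from the log:

    // Control flow of the assignment traced by the [4661] log lines.
    func autoAssign(host, handle string, num int) ([]net.IPNet, error) {
        unlock := acquireHostWideIPAMLock() // "About to acquire host-wide IPAM lock."
        defer unlock()                      // "Released host-wide IPAM lock."
        for _, cidr := range lookupAffineBlocks(host) { // "Looking up existing affinities for host"
            block, err := loadBlock(cidr) // "Attempting to load block cidr=192.168.12.64/26"
            if err != nil {
                continue // affinity exists but the block is unusable; try the next one
            }
            ips := block.reserve(num, handle) // "Attempting to assign 1 addresses from block"
            if len(ips) == num {
                createHandle(handle, ips) // "Creating new handle: k8s-pod-network.<id>"
                writeBlock(block)         // "Writing block in order to claim IPs" (compare-and-swap write)
                return ips, nil           // "Successfully claimed IPs: [192.168.12.69/26]"
            }
        }
        return nil, errors.New("no affine block free; the real allocator would next claim a new block")
    }
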
Jan 15 12:51:55.086218 containerd[1712]: 2025-01-15 12:51:55.021 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.69/26] IPv6=[] ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" HandleID="k8s-pod-network.2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.087305 containerd[1712]: 2025-01-15 12:51:55.028 [INFO][4595] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0", GenerateName:"calico-kube-controllers-67656bbdb8-", Namespace:"calico-system", SelfLink:"", UID:"9db81a82-d2ca-4841-8eb8-517bd155d320", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67656bbdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"", Pod:"calico-kube-controllers-67656bbdb8-tghv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a073fbf3d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:55.087305 containerd[1712]: 2025-01-15 12:51:55.028 [INFO][4595] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.69/32] ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.087305 containerd[1712]: 2025-01-15 12:51:55.028 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a073fbf3d9 ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.087305 containerd[1712]: 2025-01-15 12:51:55.043 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.087305 
containerd[1712]: 2025-01-15 12:51:55.044 [INFO][4595] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0", GenerateName:"calico-kube-controllers-67656bbdb8-", Namespace:"calico-system", SelfLink:"", UID:"9db81a82-d2ca-4841-8eb8-517bd155d320", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67656bbdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2", Pod:"calico-kube-controllers-67656bbdb8-tghv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a073fbf3d9", MAC:"d2:a7:53:73:d2:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:51:55.087305 containerd[1712]: 2025-01-15 12:51:55.080 [INFO][4595] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2" Namespace="calico-system" Pod="calico-kube-controllers-67656bbdb8-tghv9" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:51:55.096086 systemd-networkd[1338]: vxlan.calico: Gained IPv6LL Jan 15 12:51:55.098972 containerd[1712]: time="2025-01-15T12:51:55.098681038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mgzgv,Uid:29ce35b0-776c-47e4-8693-df783bd5b593,Namespace:kube-system,Attempt:1,} returns sandbox id \"48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2\"" Jan 15 12:51:55.109141 containerd[1712]: time="2025-01-15T12:51:55.108046904Z" level=info msg="CreateContainer within sandbox \"48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 12:51:55.116367 systemd[1]: Started cri-containerd-58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59.scope - libcontainer container 58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59. Jan 15 12:51:55.146981 systemd[1]: Started cri-containerd-5f3bbcf55aed3adc9bc36765845df4a87b0d8740b0653ed1c0757e360d021cf1.scope - libcontainer container 5f3bbcf55aed3adc9bc36765845df4a87b0d8740b0653ed1c0757e360d021cf1. 
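[Annotation] Reading across components: each "RunPodSandbox ... returns sandbox id", "CreateContainer within sandbox ...", and "StartContainer" triplet in these entries is the kubelet driving containerd's CRI RuntimeService over gRPC. A bare-bones sketch of that call sequence against k8s.io/cri-api — error handling elided, and sandboxCfg/containerCfg assumed to be pre-built PodSandboxConfig/ContainerConfig values:

    // The kubelet-side CRI calls behind the RunPodSandbox / CreateContainer /
    // StartContainer log lines; a sketch, not kubelet code.
    rt := runtimeapi.NewRuntimeServiceClient(conn) // conn dials the containerd CRI socket
    sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    cr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId:  sb.PodSandboxId, // e.g. 48841d47... for coredns-6f6b679f8f-mgzgv above
        Config:        containerCfg,    // carries &ContainerMetadata{Name:coredns,Attempt:0,}
        SandboxConfig: sandboxCfg,
    })
    _, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cr.ContainerId})

The systemd "Started cri-containerd-<id>.scope" lines and the runc v2 shim's "loading plugin io.containerd.ttrpc.v1.*" messages are containerd acting on those calls: one scope and one shim task per started container.
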
Jan 15 12:51:55.170275 containerd[1712]: time="2025-01-15T12:51:55.168101615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:51:55.171593 containerd[1712]: time="2025-01-15T12:51:55.170000693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:51:55.171593 containerd[1712]: time="2025-01-15T12:51:55.170425612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:55.173146 containerd[1712]: time="2025-01-15T12:51:55.173025168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:51:55.195867 containerd[1712]: time="2025-01-15T12:51:55.195523895Z" level=info msg="CreateContainer within sandbox \"48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f40d8a2aff51f4a6daa029be20b94ff61cf117995fea52c239a99afc015422a8\"" Jan 15 12:51:55.203388 containerd[1712]: time="2025-01-15T12:51:55.201391486Z" level=info msg="StartContainer for \"f40d8a2aff51f4a6daa029be20b94ff61cf117995fea52c239a99afc015422a8\"" Jan 15 12:51:55.226927 containerd[1712]: time="2025-01-15T12:51:55.226890929Z" level=info msg="StartContainer for \"5f3bbcf55aed3adc9bc36765845df4a87b0d8740b0653ed1c0757e360d021cf1\" returns successfully" Jan 15 12:51:55.228453 containerd[1712]: time="2025-01-15T12:51:55.228323086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwl5w,Uid:78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc,Namespace:calico-system,Attempt:1,} returns sandbox id \"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59\"" Jan 15 12:51:55.243326 systemd[1]: Started cri-containerd-2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2.scope - libcontainer container 2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2. Jan 15 12:51:55.290164 systemd[1]: Started cri-containerd-f40d8a2aff51f4a6daa029be20b94ff61cf117995fea52c239a99afc015422a8.scope - libcontainer container f40d8a2aff51f4a6daa029be20b94ff61cf117995fea52c239a99afc015422a8. 
Jan 15 12:51:55.333986 containerd[1712]: time="2025-01-15T12:51:55.333920531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67656bbdb8-tghv9,Uid:9db81a82-d2ca-4841-8eb8-517bd155d320,Namespace:calico-system,Attempt:1,} returns sandbox id \"2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2\"" Jan 15 12:51:55.354287 containerd[1712]: time="2025-01-15T12:51:55.354243781Z" level=info msg="StartContainer for \"f40d8a2aff51f4a6daa029be20b94ff61cf117995fea52c239a99afc015422a8\" returns successfully" Jan 15 12:51:55.864054 systemd-networkd[1338]: cali7b08768c81c: Gained IPv6LL Jan 15 12:51:55.992114 systemd-networkd[1338]: calic6cb19c0bc1: Gained IPv6LL Jan 15 12:51:56.184107 systemd-networkd[1338]: calic0f80cb67ae: Gained IPv6LL Jan 15 12:51:56.343040 kubelet[3164]: I0115 12:51:56.341526 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mgzgv" podStartSLOduration=41.341508323 podStartE2EDuration="41.341508323s" podCreationTimestamp="2025-01-15 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:56.339649326 +0000 UTC m=+46.357342555" watchObservedRunningTime="2025-01-15 12:51:56.341508323 +0000 UTC m=+46.359201552" Jan 15 12:51:56.343040 kubelet[3164]: I0115 12:51:56.342146 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8mfwq" podStartSLOduration=41.342137682 podStartE2EDuration="41.342137682s" podCreationTimestamp="2025-01-15 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:51:55.315237878 +0000 UTC m=+45.332931107" watchObservedRunningTime="2025-01-15 12:51:56.342137682 +0000 UTC m=+46.359830871" Jan 15 12:51:56.568096 systemd-networkd[1338]: cali190e602f27f: Gained IPv6LL Jan 15 12:51:56.760211 systemd-networkd[1338]: cali6a073fbf3d9: Gained IPv6LL Jan 15 12:51:57.353282 containerd[1712]: time="2025-01-15T12:51:57.353229350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:57.355624 containerd[1712]: time="2025-01-15T12:51:57.355585906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 15 12:51:57.359682 containerd[1712]: time="2025-01-15T12:51:57.359619300Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:57.365068 containerd[1712]: time="2025-01-15T12:51:57.365015932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:57.366342 containerd[1712]: time="2025-01-15T12:51:57.366176331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.46578712s" Jan 15 12:51:57.366342 containerd[1712]: 
time="2025-01-15T12:51:57.366252531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 15 12:51:57.367858 containerd[1712]: time="2025-01-15T12:51:57.367613009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 15 12:51:57.369437 containerd[1712]: time="2025-01-15T12:51:57.369249646Z" level=info msg="CreateContainer within sandbox \"e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:51:57.404275 containerd[1712]: time="2025-01-15T12:51:57.404190635Z" level=info msg="CreateContainer within sandbox \"e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e473837af920b73801c482468b78af2fc3e91f971ff0dad319b00bfaf7aec526\"" Jan 15 12:51:57.405355 containerd[1712]: time="2025-01-15T12:51:57.404767154Z" level=info msg="StartContainer for \"e473837af920b73801c482468b78af2fc3e91f971ff0dad319b00bfaf7aec526\"" Jan 15 12:51:57.447179 systemd[1]: Started cri-containerd-e473837af920b73801c482468b78af2fc3e91f971ff0dad319b00bfaf7aec526.scope - libcontainer container e473837af920b73801c482468b78af2fc3e91f971ff0dad319b00bfaf7aec526. Jan 15 12:51:57.498746 containerd[1712]: time="2025-01-15T12:51:57.498615495Z" level=info msg="StartContainer for \"e473837af920b73801c482468b78af2fc3e91f971ff0dad319b00bfaf7aec526\" returns successfully" Jan 15 12:51:58.361723 kubelet[3164]: I0115 12:51:58.360844 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fff95f6db-dsg6r" podStartSLOduration=32.892307706 podStartE2EDuration="35.360830462s" podCreationTimestamp="2025-01-15 12:51:23 +0000 UTC" firstStartedPulling="2025-01-15 12:51:54.898564653 +0000 UTC m=+44.916257882" lastFinishedPulling="2025-01-15 12:51:57.367087409 +0000 UTC m=+47.384780638" observedRunningTime="2025-01-15 12:51:58.360258823 +0000 UTC m=+48.377952052" watchObservedRunningTime="2025-01-15 12:51:58.360830462 +0000 UTC m=+48.378523691" Jan 15 12:51:58.738230 containerd[1712]: time="2025-01-15T12:51:58.737971186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:58.740799 containerd[1712]: time="2025-01-15T12:51:58.740660942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 15 12:51:58.743969 containerd[1712]: time="2025-01-15T12:51:58.743386578Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:58.749080 containerd[1712]: time="2025-01-15T12:51:58.749037049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:51:58.749892 containerd[1712]: time="2025-01-15T12:51:58.749861128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size 
\"8834384\" in 1.382221439s" Jan 15 12:51:58.750010 containerd[1712]: time="2025-01-15T12:51:58.749990008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 15 12:51:58.758913 containerd[1712]: time="2025-01-15T12:51:58.758878555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 15 12:51:58.788083 containerd[1712]: time="2025-01-15T12:51:58.788017672Z" level=info msg="CreateContainer within sandbox \"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 15 12:51:58.827144 containerd[1712]: time="2025-01-15T12:51:58.827096174Z" level=info msg="CreateContainer within sandbox \"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ad3368c904ec60f91c62857525e991f94ef8877923005855e75bb69cbbe8fbb\"" Jan 15 12:51:58.830328 containerd[1712]: time="2025-01-15T12:51:58.830291850Z" level=info msg="StartContainer for \"5ad3368c904ec60f91c62857525e991f94ef8877923005855e75bb69cbbe8fbb\"" Jan 15 12:51:58.857142 systemd[1]: Started cri-containerd-5ad3368c904ec60f91c62857525e991f94ef8877923005855e75bb69cbbe8fbb.scope - libcontainer container 5ad3368c904ec60f91c62857525e991f94ef8877923005855e75bb69cbbe8fbb. Jan 15 12:51:58.921402 containerd[1712]: time="2025-01-15T12:51:58.921195035Z" level=info msg="StartContainer for \"5ad3368c904ec60f91c62857525e991f94ef8877923005855e75bb69cbbe8fbb\" returns successfully" Jan 15 12:51:59.360281 kubelet[3164]: I0115 12:51:59.360237 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:52:00.585850 containerd[1712]: time="2025-01-15T12:52:00.585791258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:00.588247 containerd[1712]: time="2025-01-15T12:52:00.588203615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 15 12:52:00.591985 containerd[1712]: time="2025-01-15T12:52:00.591922769Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:00.596366 containerd[1712]: time="2025-01-15T12:52:00.596304643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:00.597182 containerd[1712]: time="2025-01-15T12:52:00.597063282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.837643408s" Jan 15 12:52:00.597182 containerd[1712]: time="2025-01-15T12:52:00.597096961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 15 12:52:00.598876 containerd[1712]: 
time="2025-01-15T12:52:00.598708479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 15 12:52:00.621328 containerd[1712]: time="2025-01-15T12:52:00.621063126Z" level=info msg="CreateContainer within sandbox \"2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 15 12:52:00.656827 containerd[1712]: time="2025-01-15T12:52:00.656777514Z" level=info msg="CreateContainer within sandbox \"2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d7a20133eb66e14ce00d6c8ef3283700e2a1b3afa88ec77e7f7176ee914fdc95\"" Jan 15 12:52:00.657501 containerd[1712]: time="2025-01-15T12:52:00.657432793Z" level=info msg="StartContainer for \"d7a20133eb66e14ce00d6c8ef3283700e2a1b3afa88ec77e7f7176ee914fdc95\"" Jan 15 12:52:00.688172 systemd[1]: Started cri-containerd-d7a20133eb66e14ce00d6c8ef3283700e2a1b3afa88ec77e7f7176ee914fdc95.scope - libcontainer container d7a20133eb66e14ce00d6c8ef3283700e2a1b3afa88ec77e7f7176ee914fdc95. Jan 15 12:52:00.725103 containerd[1712]: time="2025-01-15T12:52:00.725041697Z" level=info msg="StartContainer for \"d7a20133eb66e14ce00d6c8ef3283700e2a1b3afa88ec77e7f7176ee914fdc95\" returns successfully" Jan 15 12:52:01.378872 kubelet[3164]: I0115 12:52:01.378807 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67656bbdb8-tghv9" podStartSLOduration=32.117198818 podStartE2EDuration="37.378790891s" podCreationTimestamp="2025-01-15 12:51:24 +0000 UTC" firstStartedPulling="2025-01-15 12:51:55.336480807 +0000 UTC m=+45.354174036" lastFinishedPulling="2025-01-15 12:52:00.59807288 +0000 UTC m=+50.615766109" observedRunningTime="2025-01-15 12:52:01.377701773 +0000 UTC m=+51.395394962" watchObservedRunningTime="2025-01-15 12:52:01.378790891 +0000 UTC m=+51.396484080" Jan 15 12:52:01.884483 containerd[1712]: time="2025-01-15T12:52:01.884422214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:01.886574 containerd[1712]: time="2025-01-15T12:52:01.886533451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 15 12:52:01.891033 containerd[1712]: time="2025-01-15T12:52:01.890981005Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:01.897051 containerd[1712]: time="2025-01-15T12:52:01.896993237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 12:52:01.898229 containerd[1712]: time="2025-01-15T12:52:01.897960435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.299191876s" Jan 15 12:52:01.898229 containerd[1712]: time="2025-01-15T12:52:01.897994435Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 15 12:52:01.901252 containerd[1712]: time="2025-01-15T12:52:01.901176671Z" level=info msg="CreateContainer within sandbox \"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 15 12:52:01.947236 containerd[1712]: time="2025-01-15T12:52:01.947178766Z" level=info msg="CreateContainer within sandbox \"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a7d913f7a77f87cf3232b087d9197a3c69e2c636b001ee8d38f277e0cc36c2d0\"" Jan 15 12:52:01.949480 containerd[1712]: time="2025-01-15T12:52:01.947816805Z" level=info msg="StartContainer for \"a7d913f7a77f87cf3232b087d9197a3c69e2c636b001ee8d38f277e0cc36c2d0\"" Jan 15 12:52:01.977004 systemd[1]: run-containerd-runc-k8s.io-a7d913f7a77f87cf3232b087d9197a3c69e2c636b001ee8d38f277e0cc36c2d0-runc.72yrTE.mount: Deactivated successfully. Jan 15 12:52:01.983116 systemd[1]: Started cri-containerd-a7d913f7a77f87cf3232b087d9197a3c69e2c636b001ee8d38f277e0cc36c2d0.scope - libcontainer container a7d913f7a77f87cf3232b087d9197a3c69e2c636b001ee8d38f277e0cc36c2d0. Jan 15 12:52:02.013107 containerd[1712]: time="2025-01-15T12:52:02.012886752Z" level=info msg="StartContainer for \"a7d913f7a77f87cf3232b087d9197a3c69e2c636b001ee8d38f277e0cc36c2d0\" returns successfully" Jan 15 12:52:02.194412 kubelet[3164]: I0115 12:52:02.194302 3164 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 15 12:52:02.194412 kubelet[3164]: I0115 12:52:02.194349 3164 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 15 12:52:02.434300 kubelet[3164]: I0115 12:52:02.433798 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rwl5w" podStartSLOduration=31.766157204 podStartE2EDuration="38.433779836s" podCreationTimestamp="2025-01-15 12:51:24 +0000 UTC" firstStartedPulling="2025-01-15 12:51:55.231286242 +0000 UTC m=+45.248979471" lastFinishedPulling="2025-01-15 12:52:01.898908874 +0000 UTC m=+51.916602103" observedRunningTime="2025-01-15 12:52:02.392556214 +0000 UTC m=+52.410249443" watchObservedRunningTime="2025-01-15 12:52:02.433779836 +0000 UTC m=+52.451473065" Jan 15 12:52:03.068950 containerd[1712]: time="2025-01-15T12:52:03.068876296Z" level=info msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.115 [INFO][5302] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.115 [INFO][5302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" iface="eth0" netns="/var/run/netns/cni-379ca35c-7c45-719b-db29-bb046c61ec31" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.115 [INFO][5302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" iface="eth0" netns="/var/run/netns/cni-379ca35c-7c45-719b-db29-bb046c61ec31" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.116 [INFO][5302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" iface="eth0" netns="/var/run/netns/cni-379ca35c-7c45-719b-db29-bb046c61ec31" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.116 [INFO][5302] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.116 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.137 [INFO][5308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.137 [INFO][5308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.137 [INFO][5308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.148 [WARNING][5308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.149 [INFO][5308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.150 [INFO][5308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:03.153791 containerd[1712]: 2025-01-15 12:52:03.151 [INFO][5302] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:03.154417 containerd[1712]: time="2025-01-15T12:52:03.154061695Z" level=info msg="TearDown network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" successfully" Jan 15 12:52:03.154417 containerd[1712]: time="2025-01-15T12:52:03.154091575Z" level=info msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" returns successfully" Jan 15 12:52:03.155087 containerd[1712]: time="2025-01-15T12:52:03.154720174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-mwpbr,Uid:b559d165-5f82-41f6-b8f0-2a1034da5c7c,Namespace:calico-apiserver,Attempt:1,}" Jan 15 12:52:03.157924 systemd[1]: run-netns-cni\x2d379ca35c\x2d7c45\x2d719b\x2ddb29\x2dbb046c61ec31.mount: Deactivated successfully. 
Jan 15 12:52:03.305146 systemd-networkd[1338]: calif5e481643e9: Link UP Jan 15 12:52:03.305288 systemd-networkd[1338]: calif5e481643e9: Gained carrier Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.231 [INFO][5319] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0 calico-apiserver-5fff95f6db- calico-apiserver b559d165-5f82-41f6-b8f0-2a1034da5c7c 844 0 2025-01-15 12:51:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fff95f6db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-c63c213d7c calico-apiserver-5fff95f6db-mwpbr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5e481643e9 [] []}} ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.231 [INFO][5319] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.255 [INFO][5326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" HandleID="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.266 [INFO][5326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" HandleID="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-c63c213d7c", "pod":"calico-apiserver-5fff95f6db-mwpbr", "timestamp":"2025-01-15 12:52:03.255821631 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c63c213d7c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.266 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.266 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.266 [INFO][5326] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c63c213d7c' Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.268 [INFO][5326] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.271 [INFO][5326] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.275 [INFO][5326] ipam/ipam.go 489: Trying affinity for 192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.277 [INFO][5326] ipam/ipam.go 155: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.279 [INFO][5326] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.279 [INFO][5326] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.286 [INFO][5326] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.293 [INFO][5326] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.299 [INFO][5326] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.12.70/26] block=192.168.12.64/26 handle="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.299 [INFO][5326] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.12.70/26] handle="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" host="ci-4081.3.0-a-c63c213d7c" Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.299 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 15 12:52:03.323173 containerd[1712]: 2025-01-15 12:52:03.299 [INFO][5326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.12.70/26] IPv6=[] ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" HandleID="k8s-pod-network.b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.326155 containerd[1712]: 2025-01-15 12:52:03.301 [INFO][5319] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b559d165-5f82-41f6-b8f0-2a1034da5c7c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"", Pod:"calico-apiserver-5fff95f6db-mwpbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5e481643e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:03.326155 containerd[1712]: 2025-01-15 12:52:03.302 [INFO][5319] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.12.70/32] ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.326155 containerd[1712]: 2025-01-15 12:52:03.302 [INFO][5319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5e481643e9 ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.326155 containerd[1712]: 2025-01-15 12:52:03.303 [INFO][5319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.326155 containerd[1712]: 2025-01-15 12:52:03.304 [INFO][5319] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b559d165-5f82-41f6-b8f0-2a1034da5c7c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad", Pod:"calico-apiserver-5fff95f6db-mwpbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5e481643e9", MAC:"da:17:cc:84:95:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:03.326155 containerd[1712]: 2025-01-15 12:52:03.320 [INFO][5319] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad" Namespace="calico-apiserver" Pod="calico-apiserver-5fff95f6db-mwpbr" WorkloadEndpoint="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:03.349758 containerd[1712]: time="2025-01-15T12:52:03.349641218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 15 12:52:03.349758 containerd[1712]: time="2025-01-15T12:52:03.349701778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 15 12:52:03.349758 containerd[1712]: time="2025-01-15T12:52:03.349712298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:03.350119 containerd[1712]: time="2025-01-15T12:52:03.349789498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 15 12:52:03.369149 systemd[1]: Started cri-containerd-b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad.scope - libcontainer container b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad. 
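[Annotation] The endpoint=&v3.WorkloadEndpoint{...} dumps in these lines are the libcalico-go API object itself, printed with %v. Reduced to the fields the plugin is shown filling in (InterfaceName at dataplane_linux.go 69, then MAC and ContainerID at k8s.go 414, before the k8s.go 500 datastore write), the object for the apiserver pod is roughly the following Go literal — field values are verbatim from the log, import paths for the v3 types assumed:

    // The logged endpoint restated as the literal it was printed from;
    // zero-valued fields omitted.
    wep := &v3.WorkloadEndpoint{
        TypeMeta: metav1.TypeMeta{Kind: "WorkloadEndpoint", APIVersion: "projectcalico.org/v3"},
        ObjectMeta: metav1.ObjectMeta{
            Name:      "ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0",
            Namespace: "calico-apiserver",
            UID:       "b559d165-5f82-41f6-b8f0-2a1034da5c7c",
        },
        Spec: v3.WorkloadEndpointSpec{
            Orchestrator:  "k8s",
            Node:          "ci-4081.3.0-a-c63c213d7c",
            ContainerID:   "b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad",
            Pod:           "calico-apiserver-5fff95f6db-mwpbr",
            Endpoint:      "eth0",
            IPNetworks:    []string{"192.168.12.70/32"},
            Profiles:      []string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"},
            InterfaceName: "calif5e481643e9",   // host-side veth name, set at dataplane_linux.go 69
            MAC:           "da:17:cc:84:95:62", // added at k8s.go 414
        },
    }
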
Jan 15 12:52:03.400105 containerd[1712]: time="2025-01-15T12:52:03.400034107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fff95f6db-mwpbr,Uid:b559d165-5f82-41f6-b8f0-2a1034da5c7c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad\"" Jan 15 12:52:03.403195 containerd[1712]: time="2025-01-15T12:52:03.403157822Z" level=info msg="CreateContainer within sandbox \"b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 15 12:52:03.438967 containerd[1712]: time="2025-01-15T12:52:03.438770372Z" level=info msg="CreateContainer within sandbox \"b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"be97f5331c4b9bdc98cd1dd70818b33b49db2d9248d8e35b5ca729ba35872c89\"" Jan 15 12:52:03.440058 containerd[1712]: time="2025-01-15T12:52:03.439324971Z" level=info msg="StartContainer for \"be97f5331c4b9bdc98cd1dd70818b33b49db2d9248d8e35b5ca729ba35872c89\"" Jan 15 12:52:03.470135 systemd[1]: Started cri-containerd-be97f5331c4b9bdc98cd1dd70818b33b49db2d9248d8e35b5ca729ba35872c89.scope - libcontainer container be97f5331c4b9bdc98cd1dd70818b33b49db2d9248d8e35b5ca729ba35872c89. Jan 15 12:52:03.505837 containerd[1712]: time="2025-01-15T12:52:03.505784197Z" level=info msg="StartContainer for \"be97f5331c4b9bdc98cd1dd70818b33b49db2d9248d8e35b5ca729ba35872c89\" returns successfully" Jan 15 12:52:04.395015 kubelet[3164]: I0115 12:52:04.394275 3164 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fff95f6db-mwpbr" podStartSLOduration=41.394257098 podStartE2EDuration="41.394257098s" podCreationTimestamp="2025-01-15 12:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-15 12:52:04.3930559 +0000 UTC m=+54.410749129" watchObservedRunningTime="2025-01-15 12:52:04.394257098 +0000 UTC m=+54.411950287" Jan 15 12:52:04.505175 systemd-networkd[1338]: calif5e481643e9: Gained IPv6LL Jan 15 12:52:10.078411 containerd[1712]: time="2025-01-15T12:52:10.078375623Z" level=info msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.113 [WARNING][5475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b559d165-5f82-41f6-b8f0-2a1034da5c7c", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad", Pod:"calico-apiserver-5fff95f6db-mwpbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5e481643e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.114 [INFO][5475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.114 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" iface="eth0" netns="" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.114 [INFO][5475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.114 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.133 [INFO][5481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.134 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.134 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.142 [WARNING][5481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.142 [INFO][5481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.145 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.149806 containerd[1712]: 2025-01-15 12:52:10.147 [INFO][5475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.149806 containerd[1712]: time="2025-01-15T12:52:10.149002274Z" level=info msg="TearDown network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" successfully" Jan 15 12:52:10.149806 containerd[1712]: time="2025-01-15T12:52:10.149027234Z" level=info msg="StopPodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" returns successfully" Jan 15 12:52:10.149806 containerd[1712]: time="2025-01-15T12:52:10.149553633Z" level=info msg="RemovePodSandbox for \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" Jan 15 12:52:10.149806 containerd[1712]: time="2025-01-15T12:52:10.149584393Z" level=info msg="Forcibly stopping sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\"" Jan 15 12:52:10.214894 kubelet[3164]: I0115 12:52:10.214840 3164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.189 [WARNING][5499] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b559d165-5f82-41f6-b8f0-2a1034da5c7c", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"b0cda607e65c965a26f4a8b90f87a00d6a11a251e05588bd6fa8dd2154431dad", Pod:"calico-apiserver-5fff95f6db-mwpbr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5e481643e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.190 [INFO][5499] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.190 [INFO][5499] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" iface="eth0" netns="" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.190 [INFO][5499] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.190 [INFO][5499] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.209 [INFO][5505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.209 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.209 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.220 [WARNING][5505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.220 [INFO][5505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" HandleID="k8s-pod-network.14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--mwpbr-eth0" Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.221 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.224610 containerd[1712]: 2025-01-15 12:52:10.223 [INFO][5499] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798" Jan 15 12:52:10.225017 containerd[1712]: time="2025-01-15T12:52:10.224668437Z" level=info msg="TearDown network for sandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" successfully" Jan 15 12:52:10.236718 containerd[1712]: time="2025-01-15T12:52:10.236510339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:52:10.236718 containerd[1712]: time="2025-01-15T12:52:10.236588619Z" level=info msg="RemovePodSandbox \"14950bd5f60be77674ad0e7c8a8f72990d12688c50646a20329300d489365798\" returns successfully" Jan 15 12:52:10.237708 containerd[1712]: time="2025-01-15T12:52:10.237454137Z" level=info msg="StopPodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\"" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.331 [WARNING][5524] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"255b2dd6-7cc6-46c7-9328-d0c8736c3451", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4", Pod:"coredns-6f6b679f8f-8mfwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b08768c81c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.332 [INFO][5524] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.332 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" iface="eth0" netns="" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.332 [INFO][5524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.332 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.352 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.353 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.353 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.361 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.361 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.363 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.365991 containerd[1712]: 2025-01-15 12:52:10.364 [INFO][5524] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.366580 containerd[1712]: time="2025-01-15T12:52:10.366459058Z" level=info msg="TearDown network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" successfully" Jan 15 12:52:10.366580 containerd[1712]: time="2025-01-15T12:52:10.366489778Z" level=info msg="StopPodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" returns successfully" Jan 15 12:52:10.367702 containerd[1712]: time="2025-01-15T12:52:10.367248977Z" level=info msg="RemovePodSandbox for \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\"" Jan 15 12:52:10.367702 containerd[1712]: time="2025-01-15T12:52:10.367293657Z" level=info msg="Forcibly stopping sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\"" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.407 [WARNING][5552] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"255b2dd6-7cc6-46c7-9328-d0c8736c3451", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"4c91b25282b7d7006889b2b301e59bab376fa28803d9046e4a3aa781e1c0e4f4", Pod:"coredns-6f6b679f8f-8mfwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b08768c81c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.407 [INFO][5552] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.407 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" iface="eth0" netns="" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.407 [INFO][5552] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.407 [INFO][5552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.427 [INFO][5558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.427 [INFO][5558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.427 [INFO][5558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.435 [WARNING][5558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.435 [INFO][5558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" HandleID="k8s-pod-network.35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--8mfwq-eth0" Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.437 [INFO][5558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.439519 containerd[1712]: 2025-01-15 12:52:10.438 [INFO][5552] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d" Jan 15 12:52:10.439950 containerd[1712]: time="2025-01-15T12:52:10.439563585Z" level=info msg="TearDown network for sandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" successfully" Jan 15 12:52:10.452865 containerd[1712]: time="2025-01-15T12:52:10.452805805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:52:10.453057 containerd[1712]: time="2025-01-15T12:52:10.452880965Z" level=info msg="RemovePodSandbox \"35eb7f582f7eff3f651f94efe7740ac038604399f2c77e8cb76dc0149203a64d\" returns successfully" Jan 15 12:52:10.453717 containerd[1712]: time="2025-01-15T12:52:10.453533124Z" level=info msg="StopPodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\"" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.488 [WARNING][5576] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0", GenerateName:"calico-kube-controllers-67656bbdb8-", Namespace:"calico-system", SelfLink:"", UID:"9db81a82-d2ca-4841-8eb8-517bd155d320", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67656bbdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2", Pod:"calico-kube-controllers-67656bbdb8-tghv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a073fbf3d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.488 [INFO][5576] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.488 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" iface="eth0" netns="" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.488 [INFO][5576] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.488 [INFO][5576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.510 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.510 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.510 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.518 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.518 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.519 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.522839 containerd[1712]: 2025-01-15 12:52:10.521 [INFO][5576] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.523437 containerd[1712]: time="2025-01-15T12:52:10.523313096Z" level=info msg="TearDown network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" successfully" Jan 15 12:52:10.523437 containerd[1712]: time="2025-01-15T12:52:10.523344776Z" level=info msg="StopPodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" returns successfully" Jan 15 12:52:10.523805 containerd[1712]: time="2025-01-15T12:52:10.523775855Z" level=info msg="RemovePodSandbox for \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\"" Jan 15 12:52:10.523847 containerd[1712]: time="2025-01-15T12:52:10.523809415Z" level=info msg="Forcibly stopping sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\"" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.559 [WARNING][5600] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0", GenerateName:"calico-kube-controllers-67656bbdb8-", Namespace:"calico-system", SelfLink:"", UID:"9db81a82-d2ca-4841-8eb8-517bd155d320", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67656bbdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"2feebb8c7cccbb2caad59eb360e6a6b23801782e7d3e8999ad551eed142b61c2", Pod:"calico-kube-controllers-67656bbdb8-tghv9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a073fbf3d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.560 [INFO][5600] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.560 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" iface="eth0" netns="" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.560 [INFO][5600] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.560 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.581 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.582 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.582 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.589 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.590 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" HandleID="k8s-pod-network.6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--kube--controllers--67656bbdb8--tghv9-eth0" Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.591 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.594193 containerd[1712]: 2025-01-15 12:52:10.592 [INFO][5600] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4" Jan 15 12:52:10.594603 containerd[1712]: time="2025-01-15T12:52:10.594234187Z" level=info msg="TearDown network for sandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" successfully" Jan 15 12:52:10.604821 containerd[1712]: time="2025-01-15T12:52:10.604773930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:52:10.604989 containerd[1712]: time="2025-01-15T12:52:10.604847370Z" level=info msg="RemovePodSandbox \"6a16aa502b8998129bea5c48ade2cf85f1f86419be417ab48a7bb306469d58c4\" returns successfully" Jan 15 12:52:10.605471 containerd[1712]: time="2025-01-15T12:52:10.605298690Z" level=info msg="StopPodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\"" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.640 [WARNING][5624] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"a02275ac-e608-4dc2-9a86-dab48a463c3a", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d", Pod:"calico-apiserver-5fff95f6db-dsg6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6cb19c0bc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.640 [INFO][5624] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.640 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" iface="eth0" netns="" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.640 [INFO][5624] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.640 [INFO][5624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.658 [INFO][5630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.658 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.658 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.666 [WARNING][5630] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.666 [INFO][5630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.669 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.672084 containerd[1712]: 2025-01-15 12:52:10.670 [INFO][5624] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.672084 containerd[1712]: time="2025-01-15T12:52:10.671962347Z" level=info msg="TearDown network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" successfully" Jan 15 12:52:10.672084 containerd[1712]: time="2025-01-15T12:52:10.671987947Z" level=info msg="StopPodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" returns successfully" Jan 15 12:52:10.674032 containerd[1712]: time="2025-01-15T12:52:10.674000863Z" level=info msg="RemovePodSandbox for \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\"" Jan 15 12:52:10.674101 containerd[1712]: time="2025-01-15T12:52:10.674044183Z" level=info msg="Forcibly stopping sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\"" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.708 [WARNING][5648] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0", GenerateName:"calico-apiserver-5fff95f6db-", Namespace:"calico-apiserver", SelfLink:"", UID:"a02275ac-e608-4dc2-9a86-dab48a463c3a", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fff95f6db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"e2b3332c9ec40aff74a86fc11880210a3a84ca1de05ed4bece52e0500979d84d", Pod:"calico-apiserver-5fff95f6db-dsg6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6cb19c0bc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.709 [INFO][5648] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.709 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" iface="eth0" netns="" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.709 [INFO][5648] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.709 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.726 [INFO][5654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.726 [INFO][5654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.726 [INFO][5654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.735 [WARNING][5654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.735 [INFO][5654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" HandleID="k8s-pod-network.59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Workload="ci--4081.3.0--a--c63c213d7c-k8s-calico--apiserver--5fff95f6db--dsg6r-eth0" Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.736 [INFO][5654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.739589 containerd[1712]: 2025-01-15 12:52:10.738 [INFO][5648] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105" Jan 15 12:52:10.740181 containerd[1712]: time="2025-01-15T12:52:10.739611522Z" level=info msg="TearDown network for sandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" successfully" Jan 15 12:52:10.749053 containerd[1712]: time="2025-01-15T12:52:10.748991868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:52:10.749149 containerd[1712]: time="2025-01-15T12:52:10.749084388Z" level=info msg="RemovePodSandbox \"59fc1ac5cfbf92634b4e780a25127a21293fe6b6c4476439c529b4ee3c7e2105\" returns successfully" Jan 15 12:52:10.749727 containerd[1712]: time="2025-01-15T12:52:10.749519947Z" level=info msg="StopPodSandbox for \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\"" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.783 [WARNING][5672] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"29ce35b0-776c-47e4-8693-df783bd5b593", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2", Pod:"coredns-6f6b679f8f-mgzgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0f80cb67ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.784 [INFO][5672] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.784 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" iface="eth0" netns="" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.784 [INFO][5672] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.784 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.806 [INFO][5678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.806 [INFO][5678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.806 [INFO][5678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.816 [WARNING][5678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.816 [INFO][5678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.817 [INFO][5678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.821583 containerd[1712]: 2025-01-15 12:52:10.820 [INFO][5672] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.823047 containerd[1712]: time="2025-01-15T12:52:10.821630356Z" level=info msg="TearDown network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" successfully" Jan 15 12:52:10.823047 containerd[1712]: time="2025-01-15T12:52:10.821656675Z" level=info msg="StopPodSandbox for \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" returns successfully" Jan 15 12:52:10.823047 containerd[1712]: time="2025-01-15T12:52:10.822604154Z" level=info msg="RemovePodSandbox for \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\"" Jan 15 12:52:10.823047 containerd[1712]: time="2025-01-15T12:52:10.822634994Z" level=info msg="Forcibly stopping sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\"" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.862 [WARNING][5696] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"29ce35b0-776c-47e4-8693-df783bd5b593", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"48841d478d7f9431ec7365bf6231c6ad5402104f8fc21f6f91b83b66fb91dac2", Pod:"coredns-6f6b679f8f-mgzgv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0f80cb67ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.862 [INFO][5696] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.862 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" iface="eth0" netns="" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.862 [INFO][5696] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.862 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.882 [INFO][5702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.882 [INFO][5702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.882 [INFO][5702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.890 [WARNING][5702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.890 [INFO][5702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" HandleID="k8s-pod-network.b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Workload="ci--4081.3.0--a--c63c213d7c-k8s-coredns--6f6b679f8f--mgzgv-eth0" Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.891 [INFO][5702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:10.894754 containerd[1712]: 2025-01-15 12:52:10.893 [INFO][5696] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7" Jan 15 12:52:10.895299 containerd[1712]: time="2025-01-15T12:52:10.894779403Z" level=info msg="TearDown network for sandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" successfully" Jan 15 12:52:11.908599 containerd[1712]: time="2025-01-15T12:52:11.908552277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:52:11.909395 containerd[1712]: time="2025-01-15T12:52:11.908836637Z" level=info msg="RemovePodSandbox \"b91df7fe3d3605968506f8c57d92ab1571811484549b6fbac5ecf2f05b0323b7\" returns successfully" Jan 15 12:52:11.909562 containerd[1712]: time="2025-01-15T12:52:11.909496956Z" level=info msg="StopPodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\"" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.944 [WARNING][5720] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59", Pod:"csi-node-driver-rwl5w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190e602f27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.944 [INFO][5720] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.944 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" iface="eth0" netns="" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.944 [INFO][5720] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.944 [INFO][5720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.964 [INFO][5726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.964 [INFO][5726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.964 [INFO][5726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.972 [WARNING][5726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.972 [INFO][5726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.974 [INFO][5726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:11.976663 containerd[1712]: 2025-01-15 12:52:11.975 [INFO][5720] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:11.977103 containerd[1712]: time="2025-01-15T12:52:11.976733212Z" level=info msg="TearDown network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" successfully" Jan 15 12:52:11.977103 containerd[1712]: time="2025-01-15T12:52:11.976777972Z" level=info msg="StopPodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" returns successfully" Jan 15 12:52:11.977389 containerd[1712]: time="2025-01-15T12:52:11.977363251Z" level=info msg="RemovePodSandbox for \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\"" Jan 15 12:52:11.977432 containerd[1712]: time="2025-01-15T12:52:11.977395531Z" level=info msg="Forcibly stopping sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\"" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.012 [WARNING][5744] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78c83d5e-cbb0-4ecd-a9c6-5aa6606ecbcc", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.January, 15, 12, 51, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c63c213d7c", ContainerID:"58b4459b31fe96be2a3bb375601071585a12a9dae5a996f52eedeb62b3f58d59", Pod:"csi-node-driver-rwl5w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali190e602f27f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.013 [INFO][5744] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.013 [INFO][5744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" iface="eth0" netns="" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.013 [INFO][5744] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.013 [INFO][5744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.031 [INFO][5751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.031 [INFO][5751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.031 [INFO][5751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.039 [WARNING][5751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.039 [INFO][5751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" HandleID="k8s-pod-network.863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Workload="ci--4081.3.0--a--c63c213d7c-k8s-csi--node--driver--rwl5w-eth0" Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.041 [INFO][5751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 15 12:52:12.043904 containerd[1712]: 2025-01-15 12:52:12.042 [INFO][5744] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245" Jan 15 12:52:12.044369 containerd[1712]: time="2025-01-15T12:52:12.043993948Z" level=info msg="TearDown network for sandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" successfully" Jan 15 12:52:12.159401 containerd[1712]: time="2025-01-15T12:52:12.159024891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 15 12:52:12.159401 containerd[1712]: time="2025-01-15T12:52:12.159106131Z" level=info msg="RemovePodSandbox \"863e193e11813ee36830af6aa3c6d2cd9442246bf7be0a014c9d9881de157245\" returns successfully" Jan 15 12:52:24.080916 systemd[1]: run-containerd-runc-k8s.io-cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329-runc.u0d1Qb.mount: Deactivated successfully. Jan 15 12:53:24.090765 systemd[1]: run-containerd-runc-k8s.io-cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329-runc.qgBjde.mount: Deactivated successfully. Jan 15 12:53:37.631594 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:42540.service - OpenSSH per-connection server daemon (10.200.16.10:42540). Jan 15 12:53:38.096957 sshd[5957]: Accepted publickey for core from 10.200.16.10 port 42540 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:38.099410 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:38.104086 systemd-logind[1685]: New session 10 of user core. Jan 15 12:53:38.110091 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 12:53:38.513175 sshd[5957]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:38.516988 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:42540.service: Deactivated successfully. Jan 15 12:53:38.517440 systemd-logind[1685]: Session 10 logged out. Waiting for processes to exit. Jan 15 12:53:38.519087 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 12:53:38.519861 systemd-logind[1685]: Removed session 10. Jan 15 12:53:43.599636 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:42548.service - OpenSSH per-connection server daemon (10.200.16.10:42548). 
Jan 15 12:53:44.072311 sshd[5995]: Accepted publickey for core from 10.200.16.10 port 42548 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:44.073976 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:44.077963 systemd-logind[1685]: New session 11 of user core. Jan 15 12:53:44.086075 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 12:53:44.402151 update_engine[1688]: I20250115 12:53:44.402015 1688 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 15 12:53:44.402151 update_engine[1688]: I20250115 12:53:44.402065 1688 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 15 12:53:44.402491 update_engine[1688]: I20250115 12:53:44.402266 1688 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 15 12:53:44.402762 update_engine[1688]: I20250115 12:53:44.402611 1688 omaha_request_params.cc:62] Current group set to lts Jan 15 12:53:44.402762 update_engine[1688]: I20250115 12:53:44.402704 1688 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 15 12:53:44.402762 update_engine[1688]: I20250115 12:53:44.402711 1688 update_attempter.cc:643] Scheduling an action processor start. Jan 15 12:53:44.402762 update_engine[1688]: I20250115 12:53:44.402726 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 12:53:44.402762 update_engine[1688]: I20250115 12:53:44.402755 1688 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 15 12:53:44.402892 update_engine[1688]: I20250115 12:53:44.402798 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 12:53:44.402892 update_engine[1688]: I20250115 12:53:44.402806 1688 omaha_request_action.cc:272] Request: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: Jan 15 12:53:44.402892 update_engine[1688]: I20250115 12:53:44.402812 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:53:44.404604 locksmithd[1773]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 15 12:53:44.405875 update_engine[1688]: I20250115 12:53:44.405705 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:53:44.406027 update_engine[1688]: I20250115 12:53:44.405995 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:53:44.469177 sshd[5995]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:44.472770 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:42548.service: Deactivated successfully. Jan 15 12:53:44.474974 systemd-logind[1685]: Session 11 logged out. Waiting for processes to exit. Jan 15 12:53:44.475133 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 12:53:44.476566 systemd-logind[1685]: Removed session 11. 
Jan 15 12:53:44.698813 update_engine[1688]: E20250115 12:53:44.698567 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:53:44.698813 update_engine[1688]: I20250115 12:53:44.698711 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 15 12:53:49.546582 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:58026.service - OpenSSH per-connection server daemon (10.200.16.10:58026). Jan 15 12:53:49.974931 sshd[6011]: Accepted publickey for core from 10.200.16.10 port 58026 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:49.976316 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:49.981537 systemd-logind[1685]: New session 12 of user core. Jan 15 12:53:49.988146 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 12:53:50.374917 sshd[6011]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:50.378797 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:58026.service: Deactivated successfully. Jan 15 12:53:50.380769 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 12:53:50.381633 systemd-logind[1685]: Session 12 logged out. Waiting for processes to exit. Jan 15 12:53:50.383015 systemd-logind[1685]: Removed session 12. Jan 15 12:53:50.458150 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:58042.service - OpenSSH per-connection server daemon (10.200.16.10:58042). Jan 15 12:53:50.893240 sshd[6025]: Accepted publickey for core from 10.200.16.10 port 58042 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:50.894964 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:50.902375 systemd-logind[1685]: New session 13 of user core. Jan 15 12:53:50.908107 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 12:53:51.316042 sshd[6025]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:51.319811 systemd-logind[1685]: Session 13 logged out. Waiting for processes to exit. Jan 15 12:53:51.319885 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:58042.service: Deactivated successfully. Jan 15 12:53:51.322694 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 12:53:51.324740 systemd-logind[1685]: Removed session 13. Jan 15 12:53:51.401139 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:58052.service - OpenSSH per-connection server daemon (10.200.16.10:58052). Jan 15 12:53:51.864100 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 58052 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:51.865619 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:51.870154 systemd-logind[1685]: New session 14 of user core. Jan 15 12:53:51.872144 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 12:53:52.265626 sshd[6037]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:52.269592 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:58052.service: Deactivated successfully. Jan 15 12:53:52.271651 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 12:53:52.272985 systemd-logind[1685]: Session 14 logged out. Waiting for processes to exit. Jan 15 12:53:52.275073 systemd-logind[1685]: Removed session 14. 
Jan 15 12:53:55.403719 update_engine[1688]: I20250115 12:53:55.403184 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:53:55.403719 update_engine[1688]: I20250115 12:53:55.403425 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:53:55.403719 update_engine[1688]: I20250115 12:53:55.403662 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:53:55.442072 update_engine[1688]: E20250115 12:53:55.441908 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:53:55.442072 update_engine[1688]: I20250115 12:53:55.442040 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 15 12:53:57.353259 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:43236.service - OpenSSH per-connection server daemon (10.200.16.10:43236). Jan 15 12:53:57.822835 sshd[6070]: Accepted publickey for core from 10.200.16.10 port 43236 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:53:57.824383 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:53:57.828734 systemd-logind[1685]: New session 15 of user core. Jan 15 12:53:57.836103 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 12:53:58.231551 sshd[6070]: pam_unix(sshd:session): session closed for user core Jan 15 12:53:58.235118 systemd-logind[1685]: Session 15 logged out. Waiting for processes to exit. Jan 15 12:53:58.235768 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:43236.service: Deactivated successfully. Jan 15 12:53:58.238373 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 12:53:58.241400 systemd-logind[1685]: Removed session 15. Jan 15 12:54:03.310136 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:43248.service - OpenSSH per-connection server daemon (10.200.16.10:43248). Jan 15 12:54:03.738461 sshd[6083]: Accepted publickey for core from 10.200.16.10 port 43248 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:03.739781 sshd[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:03.743347 systemd-logind[1685]: New session 16 of user core. Jan 15 12:54:03.752281 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 12:54:04.111190 sshd[6083]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:04.114609 systemd-logind[1685]: Session 16 logged out. Waiting for processes to exit. Jan 15 12:54:04.115269 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:43248.service: Deactivated successfully. Jan 15 12:54:04.117123 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 12:54:04.118162 systemd-logind[1685]: Removed session 16. Jan 15 12:54:05.401960 update_engine[1688]: I20250115 12:54:05.401436 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:05.401960 update_engine[1688]: I20250115 12:54:05.401659 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:05.401960 update_engine[1688]: I20250115 12:54:05.401880 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 15 12:54:05.446469 update_engine[1688]: E20250115 12:54:05.446395 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:05.446602 update_engine[1688]: I20250115 12:54:05.446511 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 15 12:54:09.194869 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:59236.service - OpenSSH per-connection server daemon (10.200.16.10:59236). Jan 15 12:54:09.622005 sshd[6118]: Accepted publickey for core from 10.200.16.10 port 59236 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:09.623369 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:09.627990 systemd-logind[1685]: New session 17 of user core. Jan 15 12:54:09.634338 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 12:54:10.002553 sshd[6118]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:10.005392 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:59236.service: Deactivated successfully. Jan 15 12:54:10.007149 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 12:54:10.009704 systemd-logind[1685]: Session 17 logged out. Waiting for processes to exit. Jan 15 12:54:10.010723 systemd-logind[1685]: Removed session 17. Jan 15 12:54:15.080783 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:59240.service - OpenSSH per-connection server daemon (10.200.16.10:59240). Jan 15 12:54:15.406150 update_engine[1688]: I20250115 12:54:15.406008 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:15.406452 update_engine[1688]: I20250115 12:54:15.406292 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:15.406828 update_engine[1688]: I20250115 12:54:15.406546 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:54:15.507630 update_engine[1688]: E20250115 12:54:15.507576 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:15.507764 update_engine[1688]: I20250115 12:54:15.507655 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 12:54:15.507764 update_engine[1688]: I20250115 12:54:15.507663 1688 omaha_request_action.cc:617] Omaha request response: Jan 15 12:54:15.507764 update_engine[1688]: E20250115 12:54:15.507742 1688 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 15 12:54:15.507764 update_engine[1688]: I20250115 12:54:15.507759 1688 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 15 12:54:15.507866 update_engine[1688]: I20250115 12:54:15.507766 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 12:54:15.507866 update_engine[1688]: I20250115 12:54:15.507771 1688 update_attempter.cc:306] Processing Done. Jan 15 12:54:15.507866 update_engine[1688]: E20250115 12:54:15.507784 1688 update_attempter.cc:619] Update failed. 
Jan 15 12:54:15.507866 update_engine[1688]: I20250115 12:54:15.507789 1688 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 15 12:54:15.507866 update_engine[1688]: I20250115 12:54:15.507794 1688 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 15 12:54:15.507866 update_engine[1688]: I20250115 12:54:15.507799 1688 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 15 12:54:15.508092 update_engine[1688]: I20250115 12:54:15.507864 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 15 12:54:15.508092 update_engine[1688]: I20250115 12:54:15.507886 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 15 12:54:15.508092 update_engine[1688]: I20250115 12:54:15.507894 1688 omaha_request_action.cc:272] Request: Jan 15 12:54:15.508092 update_engine[1688]: Jan 15 12:54:15.508092 update_engine[1688]: Jan 15 12:54:15.508092 update_engine[1688]: Jan 15 12:54:15.508092 update_engine[1688]: Jan 15 12:54:15.508092 update_engine[1688]: Jan 15 12:54:15.508092 update_engine[1688]: Jan 15 12:54:15.508092 update_engine[1688]: I20250115 12:54:15.507898 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 15 12:54:15.508092 update_engine[1688]: I20250115 12:54:15.508056 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 15 12:54:15.508288 update_engine[1688]: I20250115 12:54:15.508265 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 15 12:54:15.508605 locksmithd[1773]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 15 12:54:15.512625 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 59240 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:15.514139 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:15.518533 systemd-logind[1685]: New session 18 of user core. Jan 15 12:54:15.525114 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 12:54:15.544509 update_engine[1688]: E20250115 12:54:15.544452 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544535 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544544 1688 omaha_request_action.cc:617] Omaha request response: Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544551 1688 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544556 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544561 1688 update_attempter.cc:306] Processing Done. Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544569 1688 update_attempter.cc:310] Error event sent. 
Jan 15 12:54:15.544629 update_engine[1688]: I20250115 12:54:15.544579 1688 update_check_scheduler.cc:74] Next update check in 48m10s Jan 15 12:54:15.544964 locksmithd[1773]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 15 12:54:15.903503 sshd[6132]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:15.907286 systemd-logind[1685]: Session 18 logged out. Waiting for processes to exit. Jan 15 12:54:15.908313 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:59240.service: Deactivated successfully. Jan 15 12:54:15.910286 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 12:54:15.911483 systemd-logind[1685]: Removed session 18. Jan 15 12:54:15.988203 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:56672.service - OpenSSH per-connection server daemon (10.200.16.10:56672). Jan 15 12:54:16.413257 sshd[6147]: Accepted publickey for core from 10.200.16.10 port 56672 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:16.414634 sshd[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:16.419119 systemd-logind[1685]: New session 19 of user core. Jan 15 12:54:16.423104 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 12:54:16.891173 sshd[6147]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:16.894740 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:56672.service: Deactivated successfully. Jan 15 12:54:16.896477 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 12:54:16.897202 systemd-logind[1685]: Session 19 logged out. Waiting for processes to exit. Jan 15 12:54:16.898587 systemd-logind[1685]: Removed session 19. Jan 15 12:54:16.983229 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:56676.service - OpenSSH per-connection server daemon (10.200.16.10:56676). Jan 15 12:54:17.444061 sshd[6158]: Accepted publickey for core from 10.200.16.10 port 56676 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:17.446268 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:17.450127 systemd-logind[1685]: New session 20 of user core. Jan 15 12:54:17.461083 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 12:54:19.584023 sshd[6158]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:19.587103 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:56676.service: Deactivated successfully. Jan 15 12:54:19.590281 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 12:54:19.591919 systemd-logind[1685]: Session 20 logged out. Waiting for processes to exit. Jan 15 12:54:19.593563 systemd-logind[1685]: Removed session 20. Jan 15 12:54:19.668099 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:56678.service - OpenSSH per-connection server daemon (10.200.16.10:56678). Jan 15 12:54:20.142301 sshd[6177]: Accepted publickey for core from 10.200.16.10 port 56678 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:20.144193 sshd[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:20.149096 systemd-logind[1685]: New session 21 of user core. Jan 15 12:54:20.156110 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 12:54:20.655114 sshd[6177]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:20.659226 systemd-logind[1685]: Session 21 logged out. Waiting for processes to exit. 
Jan 15 12:54:20.660063 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:56678.service: Deactivated successfully. Jan 15 12:54:20.662461 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 12:54:20.663501 systemd-logind[1685]: Removed session 21. Jan 15 12:54:20.731693 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:56686.service - OpenSSH per-connection server daemon (10.200.16.10:56686). Jan 15 12:54:21.159773 sshd[6187]: Accepted publickey for core from 10.200.16.10 port 56686 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:21.161258 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:21.165087 systemd-logind[1685]: New session 22 of user core. Jan 15 12:54:21.175076 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 12:54:21.536679 sshd[6187]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:21.540685 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:56686.service: Deactivated successfully. Jan 15 12:54:21.543166 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 12:54:21.544025 systemd-logind[1685]: Session 22 logged out. Waiting for processes to exit. Jan 15 12:54:21.545910 systemd-logind[1685]: Removed session 22. Jan 15 12:54:24.083216 systemd[1]: run-containerd-runc-k8s.io-cefd14f7281489f2a621c1d57c5d8147ba12d219ce1667c3eae9538f14495329-runc.ZBbeKp.mount: Deactivated successfully. Jan 15 12:54:26.623614 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:54076.service - OpenSSH per-connection server daemon (10.200.16.10:54076). Jan 15 12:54:27.085475 sshd[6224]: Accepted publickey for core from 10.200.16.10 port 54076 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:27.086827 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:27.091356 systemd-logind[1685]: New session 23 of user core. Jan 15 12:54:27.098054 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 12:54:27.488787 sshd[6224]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:27.492595 systemd-logind[1685]: Session 23 logged out. Waiting for processes to exit. Jan 15 12:54:27.493179 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:54076.service: Deactivated successfully. Jan 15 12:54:27.495130 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 12:54:27.496199 systemd-logind[1685]: Removed session 23. Jan 15 12:54:32.571583 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:54082.service - OpenSSH per-connection server daemon (10.200.16.10:54082). Jan 15 12:54:33.032902 sshd[6236]: Accepted publickey for core from 10.200.16.10 port 54082 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:33.034342 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:33.039023 systemd-logind[1685]: New session 24 of user core. Jan 15 12:54:33.042067 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 15 12:54:33.434194 sshd[6236]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:33.437422 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:54082.service: Deactivated successfully. Jan 15 12:54:33.439716 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 12:54:33.441307 systemd-logind[1685]: Session 24 logged out. Waiting for processes to exit. Jan 15 12:54:33.442847 systemd-logind[1685]: Removed session 24. 
Jan 15 12:54:38.519253 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:57082.service - OpenSSH per-connection server daemon (10.200.16.10:57082). Jan 15 12:54:38.988235 sshd[6275]: Accepted publickey for core from 10.200.16.10 port 57082 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:38.989604 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:38.994670 systemd-logind[1685]: New session 25 of user core. Jan 15 12:54:39.001078 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 15 12:54:39.386011 sshd[6275]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:39.388573 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:57082.service: Deactivated successfully. Jan 15 12:54:39.390371 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 12:54:39.391809 systemd-logind[1685]: Session 25 logged out. Waiting for processes to exit. Jan 15 12:54:39.392699 systemd-logind[1685]: Removed session 25. Jan 15 12:54:44.476200 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:57086.service - OpenSSH per-connection server daemon (10.200.16.10:57086). Jan 15 12:54:44.938078 sshd[6309]: Accepted publickey for core from 10.200.16.10 port 57086 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:44.939800 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:44.946630 systemd-logind[1685]: New session 26 of user core. Jan 15 12:54:44.951211 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 15 12:54:45.341408 sshd[6309]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:45.344056 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:57086.service: Deactivated successfully. Jan 15 12:54:45.345819 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 12:54:45.347891 systemd-logind[1685]: Session 26 logged out. Waiting for processes to exit. Jan 15 12:54:45.348990 systemd-logind[1685]: Removed session 26. Jan 15 12:54:50.429221 systemd[1]: Started sshd@24-10.200.20.18:22-10.200.16.10:53306.service - OpenSSH per-connection server daemon (10.200.16.10:53306). Jan 15 12:54:50.894517 sshd[6328]: Accepted publickey for core from 10.200.16.10 port 53306 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:50.895989 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:50.901111 systemd-logind[1685]: New session 27 of user core. Jan 15 12:54:50.909204 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 15 12:54:51.300955 sshd[6328]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:51.303745 systemd[1]: sshd@24-10.200.20.18:22-10.200.16.10:53306.service: Deactivated successfully. Jan 15 12:54:51.308821 systemd[1]: session-27.scope: Deactivated successfully. Jan 15 12:54:51.310984 systemd-logind[1685]: Session 27 logged out. Waiting for processes to exit. Jan 15 12:54:51.313995 systemd-logind[1685]: Removed session 27. Jan 15 12:54:56.382984 systemd[1]: Started sshd@25-10.200.20.18:22-10.200.16.10:39520.service - OpenSSH per-connection server daemon (10.200.16.10:39520). 
Jan 15 12:54:56.848017 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 39520 ssh2: RSA SHA256:3TKB8H62jxUP/z4JZRDHwyyFOgqyGuw8iIOU8t12cZM Jan 15 12:54:56.849622 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 12:54:56.854061 systemd-logind[1685]: New session 28 of user core. Jan 15 12:54:56.860106 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 15 12:54:57.243197 sshd[6364]: pam_unix(sshd:session): session closed for user core Jan 15 12:54:57.247518 systemd-logind[1685]: Session 28 logged out. Waiting for processes to exit. Jan 15 12:54:57.248255 systemd[1]: sshd@25-10.200.20.18:22-10.200.16.10:39520.service: Deactivated successfully. Jan 15 12:54:57.250795 systemd[1]: session-28.scope: Deactivated successfully. Jan 15 12:54:57.252775 systemd-logind[1685]: Removed session 28.