Feb 9 09:54:11.097494 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:54:11.097513 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:54:11.097520 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 09:54:11.097527 kernel: printk: bootconsole [pl11] enabled
Feb 9 09:54:11.097532 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:54:11.097538 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 09:54:11.097544 kernel: random: crng init done
Feb 9 09:54:11.097550 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:54:11.097555 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 09:54:11.097561 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097566 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097572 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 09:54:11.097578 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097583 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097596 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097601 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097608 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097614 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097620 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 09:54:11.097626 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:54:11.097631 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 09:54:11.097637 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:54:11.097643 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:54:11.097648 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 9 09:54:11.101718 kernel: Zone ranges:
Feb 9 09:54:11.101736 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 09:54:11.101743 kernel: DMA32 empty
Feb 9 09:54:11.101753 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:54:11.101759 kernel: Movable zone start for each node
Feb 9 09:54:11.101765 kernel: Early memory node ranges
Feb 9 09:54:11.101771 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 09:54:11.101777 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 09:54:11.101782 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 09:54:11.101788 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 09:54:11.101794 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 09:54:11.101799 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 09:54:11.101805 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 09:54:11.101811 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 09:54:11.101817 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:54:11.101824 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:54:11.101833 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 09:54:11.101839 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:54:11.101845 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:54:11.101851 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:54:11.101858 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 09:54:11.101865 kernel: psci: SMC Calling Convention v1.4
Feb 9 09:54:11.101871 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 09:54:11.101877 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 09:54:11.101883 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:54:11.101889 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:54:11.101896 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:54:11.101902 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:54:11.101908 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:54:11.101914 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:54:11.101921 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:54:11.101927 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:54:11.101934 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:54:11.101940 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:54:11.101946 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 09:54:11.101952 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 09:54:11.101958 kernel: Policy zone: Normal
Feb 9 09:54:11.101966 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:54:11.101973 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:54:11.101979 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:54:11.101986 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:54:11.101992 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:54:11.101999 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 09:54:11.102006 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 9 09:54:11.102012 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:54:11.102018 kernel: trace event string verifier disabled
Feb 9 09:54:11.102024 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:54:11.102031 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:54:11.102037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:54:11.102043 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:54:11.102050 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:54:11.102056 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:54:11.102062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:54:11.102069 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:54:11.102075 kernel: GICv3: 960 SPIs implemented
Feb 9 09:54:11.102081 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:54:11.102087 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:54:11.102093 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:54:11.102099 kernel: GICv3: 16 PPIs implemented
Feb 9 09:54:11.102105 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 09:54:11.102111 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 09:54:11.102118 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:54:11.102124 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:54:11.102130 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:54:11.102136 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:54:11.102144 kernel: Console: colour dummy device 80x25
Feb 9 09:54:11.102151 kernel: printk: console [tty1] enabled
Feb 9 09:54:11.102157 kernel: ACPI: Core revision 20210730
Feb 9 09:54:11.102163 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:54:11.102170 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:54:11.102176 kernel: LSM: Security Framework initializing
Feb 9 09:54:11.102182 kernel: SELinux: Initializing.
Feb 9 09:54:11.102189 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:54:11.102195 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:54:11.102202 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 09:54:11.102209 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 09:54:11.102215 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:54:11.102221 kernel: Remapping and enabling EFI services.
Feb 9 09:54:11.102227 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:54:11.102234 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:54:11.102240 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 09:54:11.102247 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:54:11.102253 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:54:11.102261 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:54:11.102267 kernel: SMP: Total of 2 processors activated.
Feb 9 09:54:11.102273 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:54:11.102280 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 09:54:11.102286 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:54:11.102293 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:54:11.102299 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:54:11.102305 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:54:11.102312 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:54:11.102319 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:54:11.102325 kernel: alternatives: patching kernel code
Feb 9 09:54:11.102336 kernel: devtmpfs: initialized
Feb 9 09:54:11.102344 kernel: KASLR enabled
Feb 9 09:54:11.102351 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:54:11.102358 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:54:11.102364 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:54:11.102371 kernel: SMBIOS 3.1.0 present.
Feb 9 09:54:11.102378 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 09:54:11.102385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:54:11.102392 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:54:11.102399 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:54:11.102406 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:54:11.102413 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:54:11.102419 kernel: audit: type=2000 audit(0.095:1): state=initialized audit_enabled=0 res=1
Feb 9 09:54:11.102426 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:54:11.102433 kernel: cpuidle: using governor menu
Feb 9 09:54:11.102441 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:54:11.102447 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:54:11.102454 kernel: ACPI: bus type PCI registered
Feb 9 09:54:11.102461 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:54:11.102467 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:54:11.102474 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:54:11.102481 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:54:11.102487 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:54:11.102494 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:54:11.102502 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:54:11.102509 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:54:11.102515 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:54:11.102522 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:54:11.102529 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:54:11.102535 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:54:11.102542 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:54:11.102549 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:54:11.102555 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:54:11.102563 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:54:11.102570 kernel: ACPI: Interpreter enabled
Feb 9 09:54:11.102577 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:54:11.102583 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:54:11.102590 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:54:11.102597 kernel: printk: bootconsole [pl11] disabled
Feb 9 09:54:11.102604 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 09:54:11.102610 kernel: iommu: Default domain type: Translated
Feb 9 09:54:11.102617 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:54:11.102625 kernel: vgaarb: loaded
Feb 9 09:54:11.102631 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:54:11.102638 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 09:54:11.102645 kernel: PTP clock support registered
Feb 9 09:54:11.102651 kernel: Registered efivars operations
Feb 9 09:54:11.102666 kernel: No ACPI PMU IRQ for CPU0
Feb 9 09:54:11.102673 kernel: No ACPI PMU IRQ for CPU1
Feb 9 09:54:11.102680 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:54:11.102686 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:54:11.102694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:54:11.102701 kernel: pnp: PnP ACPI init
Feb 9 09:54:11.102707 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 09:54:11.102714 kernel: NET: Registered PF_INET protocol family
Feb 9 09:54:11.102721 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:54:11.102728 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:54:11.102735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:54:11.102742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:54:11.102749 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:54:11.102757 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:54:11.102763 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:54:11.102770 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:54:11.102777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:54:11.102783 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:54:11.102790 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 09:54:11.102797 kernel: kvm [1]: HYP mode not available
Feb 9 09:54:11.102803 kernel: Initialise system trusted keyrings
Feb 9 09:54:11.102810 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:54:11.102818 kernel: Key type asymmetric registered
Feb 9 09:54:11.102825 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:54:11.102831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:54:11.102838 kernel: io scheduler mq-deadline registered
Feb 9 09:54:11.102845 kernel: io scheduler kyber registered
Feb 9 09:54:11.102851 kernel: io scheduler bfq registered
Feb 9 09:54:11.102858 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:54:11.102865 kernel: thunder_xcv, ver 1.0
Feb 9 09:54:11.102871 kernel: thunder_bgx, ver 1.0
Feb 9 09:54:11.102879 kernel: nicpf, ver 1.0
Feb 9 09:54:11.102885 kernel: nicvf, ver 1.0
Feb 9 09:54:11.103008 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:54:11.103070 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:54:10 UTC (1707472450)
Feb 9 09:54:11.103079 kernel: efifb: probing for efifb
Feb 9 09:54:11.103086 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 09:54:11.103093 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 09:54:11.103099 kernel: efifb: scrolling: redraw
Feb 9 09:54:11.103108 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 09:54:11.103115 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 09:54:11.103121 kernel: fb0: EFI VGA frame buffer device
Feb 9 09:54:11.103128 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 09:54:11.103134 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:54:11.103141 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:54:11.103148 kernel: Segment Routing with IPv6
Feb 9 09:54:11.103154 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:54:11.103161 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:54:11.103169 kernel: Key type dns_resolver registered
Feb 9 09:54:11.103175 kernel: registered taskstats version 1
Feb 9 09:54:11.103182 kernel: Loading compiled-in X.509 certificates
Feb 9 09:54:11.103189 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:54:11.103196 kernel: Key type .fscrypt registered
Feb 9 09:54:11.103203 kernel: Key type fscrypt-provisioning registered
Feb 9 09:54:11.103209 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:54:11.103216 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:54:11.103223 kernel: ima: No architecture policies found
Feb 9 09:54:11.103231 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:54:11.103237 kernel: Run /init as init process
Feb 9 09:54:11.103244 kernel: with arguments:
Feb 9 09:54:11.103250 kernel: /init
Feb 9 09:54:11.103256 kernel: with environment:
Feb 9 09:54:11.103263 kernel: HOME=/
Feb 9 09:54:11.103269 kernel: TERM=linux
Feb 9 09:54:11.103276 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:54:11.103285 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:54:11.103295 systemd[1]: Detected virtualization microsoft.
Feb 9 09:54:11.103303 systemd[1]: Detected architecture arm64.
Feb 9 09:54:11.103310 systemd[1]: Running in initrd.
Feb 9 09:54:11.103317 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:54:11.103323 systemd[1]: Hostname set to <localhost>.
Feb 9 09:54:11.103331 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:54:11.103338 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:54:11.103346 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:54:11.103354 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:54:11.103361 systemd[1]: Reached target paths.target.
Feb 9 09:54:11.103368 systemd[1]: Reached target slices.target.
Feb 9 09:54:11.103375 systemd[1]: Reached target swap.target.
Feb 9 09:54:11.103382 systemd[1]: Reached target timers.target.
Feb 9 09:54:11.103389 systemd[1]: Listening on iscsid.socket.
Feb 9 09:54:11.103396 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:54:11.103405 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:54:11.103412 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:54:11.103419 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:54:11.103426 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:54:11.103433 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:54:11.103440 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:54:11.103448 systemd[1]: Reached target sockets.target.
Feb 9 09:54:11.103455 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:54:11.103462 systemd[1]: Finished network-cleanup.service.
Feb 9 09:54:11.103470 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:54:11.103477 systemd[1]: Starting systemd-journald.service...
Feb 9 09:54:11.103484 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:54:11.103491 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:54:11.103499 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:54:11.103509 systemd-journald[276]: Journal started
Feb 9 09:54:11.103548 systemd-journald[276]: Runtime Journal (/run/log/journal/939d5b8e0bec4c57943e5e2784171ab4) is 8.0M, max 78.6M, 70.6M free.
Feb 9 09:54:11.093710 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 09:54:11.128405 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:54:11.143202 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 09:54:11.150991 kernel: Bridge firewalling registered
Feb 9 09:54:11.151012 systemd[1]: Started systemd-journald.service.
Feb 9 09:54:11.143368 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:54:11.192442 kernel: audit: type=1130 audit(1707472451.164:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.143396 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:54:11.269755 kernel: SCSI subsystem initialized
Feb 9 09:54:11.269777 kernel: audit: type=1130 audit(1707472451.204:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.269787 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:54:11.269797 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:54:11.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.156430 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 09:54:11.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.157182 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 09:54:11.332347 kernel: audit: type=1130 audit(1707472451.274:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.332368 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:54:11.332377 kernel: audit: type=1130 audit(1707472451.308:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.192381 systemd[1]: Started systemd-resolved.service.
Feb 9 09:54:11.359533 kernel: audit: type=1130 audit(1707472451.337:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.205691 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:54:11.275162 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:54:11.308649 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:54:11.337840 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:54:11.363705 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:54:11.372347 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 09:54:11.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.373803 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:54:11.470995 kernel: audit: type=1130 audit(1707472451.407:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.471027 kernel: audit: type=1130 audit(1707472451.435:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.396317 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:54:11.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.427859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:54:11.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.436938 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:54:11.534914 kernel: audit: type=1130 audit(1707472451.470:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.534938 kernel: audit: type=1130 audit(1707472451.501:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.465756 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:54:11.471411 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:54:11.507102 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:54:11.554358 dracut-cmdline[299]: dracut-dracut-053
Feb 9 09:54:11.559446 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:54:11.655684 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:54:11.667681 kernel: iscsi: registered transport (tcp)
Feb 9 09:54:11.688281 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:54:11.688298 kernel: QLogic iSCSI HBA Driver
Feb 9 09:54:11.718020 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:54:11.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:11.724378 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:54:11.783674 kernel: raid6: neonx8 gen() 13770 MB/s
Feb 9 09:54:11.801665 kernel: raid6: neonx8 xor() 10831 MB/s
Feb 9 09:54:11.822667 kernel: raid6: neonx4 gen() 13517 MB/s
Feb 9 09:54:11.844665 kernel: raid6: neonx4 xor() 11131 MB/s
Feb 9 09:54:11.865664 kernel: raid6: neonx2 gen() 12930 MB/s
Feb 9 09:54:11.886674 kernel: raid6: neonx2 xor() 10249 MB/s
Feb 9 09:54:11.908666 kernel: raid6: neonx1 gen() 10485 MB/s
Feb 9 09:54:11.929663 kernel: raid6: neonx1 xor() 8795 MB/s
Feb 9 09:54:11.950665 kernel: raid6: int64x8 gen() 6297 MB/s
Feb 9 09:54:11.972664 kernel: raid6: int64x8 xor() 3549 MB/s
Feb 9 09:54:11.993663 kernel: raid6: int64x4 gen() 7217 MB/s
Feb 9 09:54:12.015664 kernel: raid6: int64x4 xor() 3854 MB/s
Feb 9 09:54:12.036663 kernel: raid6: int64x2 gen() 6156 MB/s
Feb 9 09:54:12.057663 kernel: raid6: int64x2 xor() 3323 MB/s
Feb 9 09:54:12.080663 kernel: raid6: int64x1 gen() 5041 MB/s
Feb 9 09:54:12.106121 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 9 09:54:12.106130 kernel: raid6: using algorithm neonx8 gen() 13770 MB/s
Feb 9 09:54:12.106139 kernel: raid6: .... xor() 10831 MB/s, rmw enabled
Feb 9 09:54:12.110987 kernel: raid6: using neon recovery algorithm
Feb 9 09:54:12.137354 kernel: xor: measuring software checksum speed
Feb 9 09:54:12.137377 kernel: 8regs : 17300 MB/sec
Feb 9 09:54:12.146545 kernel: 32regs : 20749 MB/sec
Feb 9 09:54:12.146555 kernel: arm64_neon : 27778 MB/sec
Feb 9 09:54:12.146563 kernel: xor: using function: arm64_neon (27778 MB/sec)
Feb 9 09:54:12.209669 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:54:12.225125 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:54:12.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:12.235000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:54:12.235000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:54:12.236540 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:54:12.259825 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Feb 9 09:54:12.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:12.267365 systemd[1]: Started systemd-udevd.service.
Feb 9 09:54:12.280921 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:54:12.297740 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 9 09:54:12.329078 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:54:12.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:12.334459 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:54:12.368335 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:54:12.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:12.437684 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 09:54:12.444674 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 09:54:12.444717 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 09:54:12.467681 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 09:54:12.467731 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 09:54:12.477249 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 09:54:12.485920 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 09:54:12.494680 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 09:54:12.502808 kernel: scsi host1: storvsc_host_t
Feb 9 09:54:12.502883 kernel: scsi host0: storvsc_host_t
Feb 9 09:54:12.503009 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 09:54:12.515874 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 09:54:12.536681 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 09:54:12.536868 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 09:54:12.536878 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 09:54:12.546767 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 09:54:12.546903 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 09:54:12.557811 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 09:54:12.557949 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 09:54:12.559674 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 09:54:12.565669 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:12.571686 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 09:54:12.587680 kernel: hv_netvsc 000d3af6-1a26-000d-3af6-1a26000d3af6 eth0: VF slot 1 added
Feb 9 09:54:12.595686 kernel: hv_vmbus: registering driver hv_pci
Feb 9 09:54:12.606678 kernel: hv_pci dfbb81e8-37e6-47b4-8d44-a8ebcf7c2d57: PCI VMBus probing: Using version 0x10004
Feb 9 09:54:12.627828 kernel: hv_pci dfbb81e8-37e6-47b4-8d44-a8ebcf7c2d57: PCI host bridge to bus 37e6:00
Feb 9 09:54:12.628020 kernel: pci_bus 37e6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 09:54:12.628115 kernel: pci_bus 37e6:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 09:54:12.644123 kernel: pci 37e6:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 09:54:12.657810 kernel: pci 37e6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:54:12.683034 kernel: pci 37e6:00:02.0: enabling Extended Tags
Feb 9 09:54:12.706721 kernel: pci 37e6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 37e6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 09:54:12.720358 kernel: pci_bus 37e6:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 09:54:12.720525 kernel: pci 37e6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 09:54:12.762673 kernel: mlx5_core 37e6:00:02.0: firmware version: 16.30.1284
Feb 9 09:54:12.924670 kernel: mlx5_core 37e6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 09:54:12.974393 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:54:12.999260 kernel: hv_netvsc 000d3af6-1a26-000d-3af6-1a26000d3af6 eth0: VF registering: eth1
Feb 9 09:54:12.999634 kernel: mlx5_core 37e6:00:02.0 eth1: joined to eth0
Feb 9 09:54:13.010685 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (534)
Feb 9 09:54:13.017679 kernel: mlx5_core 37e6:00:02.0 enP14310s1: renamed from eth1
Feb 9 09:54:13.031484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:54:13.209256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:54:13.228352 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:54:13.235369 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:54:13.250878 systemd[1]: Starting disk-uuid.service...
Feb 9 09:54:13.273341 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:13.280666 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:14.289678 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 09:54:14.289734 disk-uuid[604]: The operation has completed successfully.
Feb 9 09:54:14.347842 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:54:14.349799 systemd[1]: Finished disk-uuid.service.
Feb 9 09:54:14.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.362841 systemd[1]: Starting verity-setup.service...
Feb 9 09:54:14.406681 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:54:14.587685 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:54:14.593852 systemd[1]: Finished verity-setup.service.
Feb 9 09:54:14.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.605546 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:54:14.673418 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:54:14.683738 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:54:14.678815 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:54:14.679566 systemd[1]: Starting ignition-setup.service...
Feb 9 09:54:14.689547 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:54:14.739203 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:14.739227 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:14.745639 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:14.795856 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:54:14.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.807000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:54:14.808505 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:54:14.828845 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:54:14.842458 systemd-networkd[871]: lo: Link UP
Feb 9 09:54:14.842468 systemd-networkd[871]: lo: Gained carrier
Feb 9 09:54:14.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.842873 systemd-networkd[871]: Enumeration completed
Feb 9 09:54:14.847205 systemd[1]: Started systemd-networkd.service.
Feb 9 09:54:14.853578 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:14.854090 systemd[1]: Reached target network.target.
Feb 9 09:54:14.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.870087 systemd[1]: Starting iscsiuio.service...
Feb 9 09:54:14.908006 iscsid[880]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:14.908006 iscsid[880]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 09:54:14.908006 iscsid[880]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:54:14.908006 iscsid[880]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:54:14.908006 iscsid[880]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:54:14.908006 iscsid[880]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:14.908006 iscsid[880]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:54:14.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.880672 systemd[1]: Started iscsiuio.service.
Feb 9 09:54:15.046359 kernel: kauditd_printk_skb: 15 callbacks suppressed
Feb 9 09:54:15.046381 kernel: audit: type=1130 audit(1707472454.999:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:14.901510 systemd[1]: Starting iscsid.service...
Feb 9 09:54:14.912751 systemd[1]: Started iscsid.service.
Feb 9 09:54:14.950037 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:54:14.989421 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:54:15.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.018870 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:54:15.096903 kernel: audit: type=1130 audit(1707472455.072:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.038002 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:54:15.049858 systemd[1]: Reached target remote-fs.target.
Feb 9 09:54:15.058836 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:54:15.067963 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:54:15.121585 systemd[1]: Finished ignition-setup.service.
Feb 9 09:54:15.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.146750 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:54:15.157314 kernel: audit: type=1130 audit(1707472455.125:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:15.162676 kernel: mlx5_core 37e6:00:02.0 enP14310s1: Link up
Feb 9 09:54:15.210365 kernel: hv_netvsc 000d3af6-1a26-000d-3af6-1a26000d3af6 eth0: Data path switched to VF: enP14310s1
Feb 9 09:54:15.211555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:54:15.210705 systemd-networkd[871]: enP14310s1: Link UP
Feb 9 09:54:15.210876 systemd-networkd[871]: eth0: Link UP
Feb 9 09:54:15.211246 systemd-networkd[871]: eth0: Gained carrier
Feb 9 09:54:15.223010 systemd-networkd[871]: enP14310s1: Gained carrier
Feb 9 09:54:15.237720 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 09:54:16.878780 systemd-networkd[871]: eth0: Gained IPv6LL
Feb 9 09:54:18.329751 ignition[895]: Ignition 2.14.0
Feb 9 09:54:18.329763 ignition[895]: Stage: fetch-offline
Feb 9 09:54:18.329821 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:18.329845 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:18.425135 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:18.425281 ignition[895]: parsed url from cmdline: ""
Feb 9 09:54:18.425284 ignition[895]: no config URL provided
Feb 9 09:54:18.425290 ignition[895]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:18.492096 kernel: audit: type=1130 audit(1707472458.456:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.444897 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:54:18.425297 ignition[895]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:54:18.458180 systemd[1]: Starting ignition-fetch.service...
Feb 9 09:54:18.425303 ignition[895]: failed to fetch config: resource requires networking
Feb 9 09:54:18.425523 ignition[895]: Ignition finished successfully
Feb 9 09:54:18.495645 ignition[901]: Ignition 2.14.0
Feb 9 09:54:18.495665 ignition[901]: Stage: fetch
Feb 9 09:54:18.495769 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:18.495790 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:18.507115 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:18.507248 ignition[901]: parsed url from cmdline: ""
Feb 9 09:54:18.507252 ignition[901]: no config URL provided
Feb 9 09:54:18.507258 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:18.507265 ignition[901]: no config at "/usr/lib/ignition/user.ign"
Feb 9 09:54:18.507297 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 09:54:18.596151 ignition[901]: GET result: OK
Feb 9 09:54:18.596254 ignition[901]: config has been read from IMDS userdata
Feb 9 09:54:18.596320 ignition[901]: parsing config with SHA512: f640c656a04626724048558306cd7a1900cdf42afd8e99160717f9f69c83d168aa8107f4b6aad6c0996d06d42d606f29602d044cb2b6207aca65809784e61600
Feb 9 09:54:18.631312 unknown[901]: fetched base config from "system"
Feb 9 09:54:18.631326 unknown[901]: fetched base config from "system"
Feb 9 09:54:18.632013 ignition[901]: fetch: fetch complete
Feb 9 09:54:18.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.631332 unknown[901]: fetched user config from "azure"
Feb 9 09:54:18.686179 kernel: audit: type=1130 audit(1707472458.651:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.632018 ignition[901]: fetch: fetch passed
Feb 9 09:54:18.645388 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:54:18.632066 ignition[901]: Ignition finished successfully
Feb 9 09:54:18.675253 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:54:18.687538 ignition[907]: Ignition 2.14.0
Feb 9 09:54:18.687545 ignition[907]: Stage: kargs
Feb 9 09:54:18.687680 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:18.764373 kernel: audit: type=1130 audit(1707472458.722:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.717097 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:54:18.687706 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:18.723986 systemd[1]: Starting ignition-disks.service...
Feb 9 09:54:18.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.691260 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:18.764485 systemd[1]: Finished ignition-disks.service.
Feb 9 09:54:18.823301 kernel: audit: type=1130 audit(1707472458.776:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.713971 ignition[907]: kargs: kargs passed
Feb 9 09:54:18.804198 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:54:18.714040 ignition[907]: Ignition finished successfully
Feb 9 09:54:18.818039 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:54:18.736999 ignition[913]: Ignition 2.14.0
Feb 9 09:54:18.829362 systemd[1]: Reached target local-fs.target.
Feb 9 09:54:18.737006 ignition[913]: Stage: disks
Feb 9 09:54:18.839342 systemd[1]: Reached target sysinit.target.
Feb 9 09:54:18.737116 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:18.850969 systemd[1]: Reached target basic.target.
Feb 9 09:54:18.737135 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:18.868122 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:54:18.739821 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:18.763458 ignition[913]: disks: disks passed
Feb 9 09:54:18.763524 ignition[913]: Ignition finished successfully
Feb 9 09:54:18.930708 systemd-fsck[921]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 9 09:54:18.941134 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:54:18.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:18.974534 systemd[1]: Mounting sysroot.mount...
Feb 9 09:54:18.982746 kernel: audit: type=1130 audit(1707472458.949:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.005671 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:54:19.006249 systemd[1]: Mounted sysroot.mount.
Feb 9 09:54:19.010642 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:54:19.055823 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:54:19.060907 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 09:54:19.069847 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:54:19.069881 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:54:19.075443 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:54:19.113269 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:19.118281 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:54:19.140677 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (932)
Feb 9 09:54:19.148394 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:54:19.159079 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:19.159101 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:19.163601 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:19.169262 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:19.209763 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:54:19.219826 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:54:19.229700 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:54:19.680581 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:54:19.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.686338 systemd[1]: Starting ignition-mount.service...
Feb 9 09:54:19.729520 kernel: audit: type=1130 audit(1707472459.685:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.719195 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:54:19.727267 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:19.727429 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:19.756864 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:54:19.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.788404 kernel: audit: type=1130 audit(1707472459.761:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.794785 ignition[1000]: INFO : Ignition 2.14.0
Feb 9 09:54:19.800097 ignition[1000]: INFO : Stage: mount
Feb 9 09:54:19.800097 ignition[1000]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:19.800097 ignition[1000]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:19.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:19.834433 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:19.834433 ignition[1000]: INFO : mount: mount passed
Feb 9 09:54:19.834433 ignition[1000]: INFO : Ignition finished successfully
Feb 9 09:54:19.804612 systemd[1]: Finished ignition-mount.service.
Feb 9 09:54:20.573815 coreos-metadata[931]: Feb 09 09:54:20.573 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 09:54:20.586679 coreos-metadata[931]: Feb 09 09:54:20.586 INFO Fetch successful
Feb 9 09:54:20.623624 coreos-metadata[931]: Feb 09 09:54:20.623 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 09:54:20.652899 coreos-metadata[931]: Feb 09 09:54:20.652 INFO Fetch successful
Feb 9 09:54:20.674142 coreos-metadata[931]: Feb 09 09:54:20.674 INFO wrote hostname ci-3510.3.2-a-8b452ef1bd to /sysroot/etc/hostname
Feb 9 09:54:20.685404 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 09:54:20.728484 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 09:54:20.728508 kernel: audit: type=1130 audit(1707472460.692:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:20.693188 systemd[1]: Starting ignition-files.service...
Feb 9 09:54:20.735260 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:20.764671 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1010)
Feb 9 09:54:20.779894 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:54:20.779907 kernel: BTRFS info (device sda6): using free space tree
Feb 9 09:54:20.779917 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 09:54:20.790200 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:20.806902 ignition[1029]: INFO : Ignition 2.14.0
Feb 9 09:54:20.806902 ignition[1029]: INFO : Stage: files
Feb 9 09:54:20.820036 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:20.820036 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 09:54:20.820036 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 09:54:20.820036 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:54:20.862252 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:54:20.862252 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:54:20.926023 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:54:20.936353 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:54:20.951537 unknown[1029]: wrote ssh authorized keys file for user: core
Feb 9 09:54:20.960146 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:54:20.969783 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:54:20.969783 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:54:21.310010 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 09:54:21.465307 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:54:21.489738 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:54:21.489738 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:54:21.489738 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 9 09:54:21.627311 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 09:54:21.848694 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 9 09:54:21.848694 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:54:21.848694 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:54:22.095971 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 09:54:22.295576 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:54:22.316802 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:54:22.316802 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:54:22.316802 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 9 09:54:22.449521 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 09:54:22.734934 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 9 09:54:22.754423 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:54:22.754423 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:54:22.754423 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:54:22.794849 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 09:54:23.084686 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 09:54:23.102990 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:54:23.102990 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:54:23.102990 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:54:23.142090 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 09:54:23.890013 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:54:23.909506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:54:23.909506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:54:23.909506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:54:23.909506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:54:23.909506 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 09:54:24.305377 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET
result: OK Feb 9 09:54:24.380644 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:54:24.391498 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:54:24.695338 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:54:24.707081 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:54:24.707081 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:54:24.707081 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:54:24.765183 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1034) Feb 9 09:54:24.741087 systemd[1]: mnt-oem3437450838.mount: Deactivated successfully. 
Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3437450838" Feb 9 09:54:24.771118 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3437450838": device or resource busy Feb 9 09:54:24.771118 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3437450838", trying btrfs: device or resource busy Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3437450838" Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3437450838" Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3437450838" Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3437450838" Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3067826755" Feb 9 09:54:24.771118 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3067826755": device or resource busy Feb 9 09:54:24.771118 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3067826755", trying btrfs: device or resource busy Feb 9 09:54:24.771118 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3067826755" Feb 9 09:54:25.045117 kernel: audit: type=1130 audit(1707472464.822:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.045149 kernel: audit: type=1130 audit(1707472464.903:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.045159 kernel: audit: type=1131 audit(1707472464.935:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:24.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:24.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:24.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.045265 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3067826755" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3067826755" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3067826755" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:54:25.045265 ignition[1029]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:54:25.368721 kernel: audit: type=1130 audit(1707472465.120:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:25.368755 kernel: audit: type=1130 audit(1707472465.222:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.368768 kernel: audit: type=1131 audit(1707472465.255:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:24.772435 systemd[1]: mnt-oem3067826755.mount: Deactivated successfully. Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:54:25.376735 ignition[1029]: INFO : files: files passed Feb 9 09:54:25.376735 ignition[1029]: INFO : Ignition finished successfully Feb 9 09:54:25.618940 kernel: audit: type=1130 audit(1707472465.381:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.618980 kernel: audit: type=1131 audit(1707472465.496:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:25.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:24.801169 systemd[1]: Finished ignition-files.service. Feb 9 09:54:25.629934 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:54:24.826002 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:54:24.863761 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:54:24.871907 systemd[1]: Starting ignition-quench.service... Feb 9 09:54:24.888446 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:54:25.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:24.888540 systemd[1]: Finished ignition-quench.service. Feb 9 09:54:25.114610 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:54:25.732191 kernel: audit: type=1131 audit(1707472465.690:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.121243 systemd[1]: Reached target ignition-complete.target. Feb 9 09:54:25.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.167577 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:54:25.210085 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:54:25.832585 kernel: audit: type=1131 audit(1707472465.736:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.832609 kernel: audit: type=1131 audit(1707472465.774:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.832619 kernel: audit: type=1131 audit(1707472465.802:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.210201 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:54:25.256553 systemd[1]: Reached target initrd-fs.target. 
Feb 9 09:54:25.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.297326 systemd[1]: Reached target initrd.target. Feb 9 09:54:25.868076 kernel: audit: type=1131 audit(1707472465.841:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.315536 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:54:25.324006 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:54:25.376059 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:54:25.418288 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:54:25.442040 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:54:25.449042 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:54:25.463918 systemd[1]: Stopped target timers.target. Feb 9 09:54:25.481231 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:54:25.481302 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:54:25.926270 ignition[1067]: INFO : Ignition 2.14.0 Feb 9 09:54:25.926270 ignition[1067]: INFO : Stage: umount Feb 9 09:54:25.926270 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:25.926270 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:54:25.926270 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:54:25.926270 ignition[1067]: INFO : umount: umount passed Feb 9 09:54:25.926270 ignition[1067]: INFO : Ignition finished successfully Feb 9 09:54:26.128548 kernel: audit: type=1131 audit(1707472465.925:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.128576 kernel: audit: type=1131 audit(1707472465.965:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.128586 kernel: audit: type=1131 audit(1707472466.002:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.128604 kernel: audit: type=1130 audit(1707472466.048:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.128614 kernel: audit: type=1131 audit(1707472466.048:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.128623 kernel: audit: type=1131 audit(1707472466.077:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:54:25.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.525565 systemd[1]: Stopped target initrd.target. Feb 9 09:54:26.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.540316 systemd[1]: Stopped target basic.target. Feb 9 09:54:25.554454 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:54:26.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.569407 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:54:25.585943 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:54:25.603548 systemd[1]: Stopped target remote-fs.target. Feb 9 09:54:25.613308 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:54:25.624301 systemd[1]: Stopped target sysinit.target. Feb 9 09:54:25.635007 systemd[1]: Stopped target local-fs.target. Feb 9 09:54:25.650903 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:54:25.662767 systemd[1]: Stopped target swap.target. Feb 9 09:54:26.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.680497 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:54:25.680555 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:54:25.716596 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:54:26.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.726513 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Feb 9 09:54:26.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.726564 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:54:26.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.257000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:54:25.762712 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:54:25.762766 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:54:26.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.774450 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:54:25.774487 systemd[1]: Stopped ignition-files.service. Feb 9 09:54:26.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.802640 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:54:26.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.802698 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:54:26.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.905544 systemd[1]: Stopping ignition-mount.service... Feb 9 09:54:25.916316 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:54:26.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.916386 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:54:25.947889 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:54:26.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.955789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:54:26.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.955854 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:54:26.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:25.965159 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:54:26.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:25.965214 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:54:26.425035 kernel: hv_netvsc 000d3af6-1a26-000d-3af6-1a26000d3af6 eth0: Data path switched from VF: enP14310s1 Feb 9 09:54:26.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.002895 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:54:26.002997 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:54:26.049831 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:54:26.050292 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:54:26.050381 systemd[1]: Stopped ignition-mount.service. Feb 9 09:54:26.077851 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:54:26.077911 systemd[1]: Stopped ignition-disks.service. Feb 9 09:54:26.082270 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:54:26.082309 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:54:26.123916 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:54:26.123959 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:54:26.133231 systemd[1]: Stopped target network.target. Feb 9 09:54:26.141114 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:54:26.141159 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:54:26.150000 systemd[1]: Stopped target paths.target. Feb 9 09:54:26.158172 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:54:26.167176 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:54:26.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:26.171942 systemd[1]: Stopped target slices.target. Feb 9 09:54:26.179908 systemd[1]: Stopped target sockets.target. Feb 9 09:54:26.187846 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:54:26.187881 systemd[1]: Closed iscsid.socket. Feb 9 09:54:26.195070 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:54:26.195091 systemd[1]: Closed iscsiuio.socket. Feb 9 09:54:26.202470 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:54:26.202514 systemd[1]: Stopped ignition-setup.service. Feb 9 09:54:26.210855 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:54:26.596049 iscsid[880]: iscsid shutting down. Feb 9 09:54:26.218704 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:54:26.227700 systemd-networkd[871]: eth0: DHCPv6 lease lost Feb 9 09:54:26.595000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:54:26.228982 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:54:26.229060 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:54:26.238063 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:54:26.238153 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:54:26.247292 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:54:26.247383 systemd[1]: Stopped systemd-networkd.service. 
Feb 9 09:54:26.257960 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:54:26.258006 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:54:26.270113 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:54:26.270162 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:54:26.286496 systemd[1]: Stopping network-cleanup.service... Feb 9 09:54:26.292055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:54:26.292131 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:54:26.302170 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:54:26.302227 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:54:26.316069 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:54:26.316110 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:54:26.596667 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 9 09:54:26.321208 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:54:26.330736 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:54:26.331263 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:54:26.331383 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:54:26.339037 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:54:26.339087 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:54:26.348728 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:54:26.348763 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:54:26.353581 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:54:26.353626 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:54:26.362341 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:54:26.362386 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:54:26.370883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:54:26.370921 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:54:26.381396 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:54:26.390742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:54:26.390799 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:54:26.405623 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:54:26.405724 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:54:26.510683 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:54:26.510811 systemd[1]: Stopped network-cleanup.service. Feb 9 09:54:26.525049 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:54:26.536694 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:54:26.554099 systemd[1]: Switching root. Feb 9 09:54:26.597252 systemd-journald[276]: Journal stopped Feb 9 09:54:38.539798 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:54:38.539834 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 09:54:38.539845 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:54:38.539869 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:54:38.539891 kernel: SELinux: policy capability open_perms=1 Feb 9 09:54:38.539900 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:54:38.539909 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:54:38.539917 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:54:38.539943 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:54:38.539953 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:54:38.539977 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:54:38.539987 systemd[1]: Successfully loaded SELinux policy in 306.162ms. Feb 9 09:54:38.540010 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.088ms. Feb 9 09:54:38.540021 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:54:38.540047 systemd[1]: Detected virtualization microsoft. Feb 9 09:54:38.540056 systemd[1]: Detected architecture arm64. Feb 9 09:54:38.540065 systemd[1]: Detected first boot. Feb 9 09:54:38.540076 systemd[1]: Hostname set to . Feb 9 09:54:38.540085 systemd[1]: Initializing machine ID from random generator. Feb 9 09:54:38.540094 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:54:38.540120 kernel: kauditd_printk_skb: 29 callbacks suppressed Feb 9 09:54:38.540129 kernel: audit: type=1400 audit(1707472471.032:86): avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:38.540154 kernel: audit: type=1300 audit(1707472471.032:86): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227b2 a1=4000028a98 a2=4000026cc0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:38.540165 kernel: audit: type=1327 audit(1707472471.032:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:38.540174 kernel: audit: type=1400 audit(1707472471.043:87): avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:38.540184 kernel: audit: type=1300 audit(1707472471.043:87): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022889 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:54:38.540207 kernel: audit: type=1307 audit(1707472471.043:87): cwd="/" Feb 9 09:54:38.540218 kernel: audit: type=1302 audit(1707472471.043:87): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:38.540240 kernel: audit: type=1302 audit(1707472471.043:87): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:38.540250 kernel: audit: type=1327 audit(1707472471.043:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:38.540259 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:54:38.540269 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:54:38.540291 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:54:38.540301 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:54:38.540325 kernel: audit: type=1334 audit(1707472477.682:88): prog-id=12 op=LOAD Feb 9 09:54:38.540335 kernel: audit: type=1334 audit(1707472477.682:89): prog-id=3 op=UNLOAD Feb 9 09:54:38.540343 kernel: audit: type=1334 audit(1707472477.689:90): prog-id=13 op=LOAD Feb 9 09:54:38.540352 kernel: audit: type=1334 audit(1707472477.697:91): prog-id=14 op=LOAD Feb 9 09:54:38.540373 kernel: audit: type=1334 audit(1707472477.697:92): prog-id=4 op=UNLOAD Feb 9 09:54:38.540383 kernel: audit: type=1334 audit(1707472477.697:93): prog-id=5 op=UNLOAD Feb 9 09:54:38.540394 kernel: audit: type=1334 audit(1707472477.704:94): prog-id=15 op=LOAD Feb 9 09:54:38.540418 kernel: audit: type=1334 audit(1707472477.704:95): prog-id=12 op=UNLOAD Feb 9 09:54:38.540428 kernel: audit: type=1334 audit(1707472477.711:96): prog-id=16 op=LOAD Feb 9 09:54:38.540437 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:54:38.540446 kernel: audit: type=1334 audit(1707472477.718:97): prog-id=17 op=LOAD Feb 9 09:54:38.540469 systemd[1]: Stopped iscsiuio.service. Feb 9 09:54:38.540479 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:54:38.540501 systemd[1]: Stopped iscsid.service. Feb 9 09:54:38.540511 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:54:38.540522 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:54:38.540531 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:54:38.540554 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:54:38.540564 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:54:38.540573 systemd[1]: Created slice system-getty.slice. Feb 9 09:54:38.540582 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:54:38.540605 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:54:38.540616 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Feb 9 09:54:38.540625 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:54:38.540650 systemd[1]: Created slice user.slice. Feb 9 09:54:38.540668 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:54:38.540678 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:54:38.540702 systemd[1]: Set up automount boot.automount. Feb 9 09:54:38.540712 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:54:38.540722 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:54:38.540731 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:54:38.540753 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:54:38.540765 systemd[1]: Reached target integritysetup.target. Feb 9 09:54:38.540787 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:54:38.540797 systemd[1]: Reached target remote-fs.target. Feb 9 09:54:38.540806 systemd[1]: Reached target slices.target. Feb 9 09:54:38.540829 systemd[1]: Reached target swap.target. Feb 9 09:54:38.540839 systemd[1]: Reached target torcx.target. Feb 9 09:54:38.540864 systemd[1]: Reached target veritysetup.target. Feb 9 09:54:38.540874 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:54:38.540899 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:54:38.540909 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:54:38.540918 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:54:38.540941 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:54:38.540951 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:54:38.540961 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:54:38.540986 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:54:38.540996 systemd[1]: Mounting media.mount... Feb 9 09:54:38.541019 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:54:38.541029 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:54:38.541038 systemd[1]: Mounting tmp.mount... Feb 9 09:54:38.541048 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:54:38.541057 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:54:38.541066 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:54:38.541075 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:54:38.541086 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:54:38.541110 systemd[1]: Starting modprobe@drm.service... Feb 9 09:54:38.541119 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:54:38.541141 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:54:38.541151 systemd[1]: Starting modprobe@loop.service... Feb 9 09:54:38.541161 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:54:38.541186 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:54:38.541196 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:54:38.541219 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:54:38.541229 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:54:38.541239 systemd[1]: Stopped systemd-journald.service. Feb 9 09:54:38.541248 systemd[1]: systemd-journald.service: Consumed 4.030s CPU time. Feb 9 09:54:38.541271 systemd[1]: Starting systemd-journald.service... Feb 9 09:54:38.541293 kernel: fuse: init (API version 7.34) Feb 9 09:54:38.541303 kernel: loop: module loaded Feb 9 09:54:38.541312 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 09:54:38.541321 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:54:38.541332 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:54:38.541355 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:54:38.541376 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:54:38.541387 systemd[1]: Stopped verity-setup.service. Feb 9 09:54:38.541396 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:54:38.541418 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:54:38.541440 systemd[1]: Mounted media.mount. Feb 9 09:54:38.541450 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:54:38.541459 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:54:38.541474 systemd-journald[1208]: Journal started Feb 9 09:54:38.541550 systemd-journald[1208]: Runtime Journal (/run/log/journal/529bb560cda14f2283ef828932c13670) is 8.0M, max 78.6M, 70.6M free. Feb 9 09:54:29.069000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:54:29.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:29.854000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:54:29.854000 audit: BPF prog-id=10 op=LOAD Feb 9 09:54:29.854000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:54:29.854000 audit: BPF prog-id=11 op=LOAD Feb 9 09:54:29.854000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:54:31.032000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:54:31.032000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227b2 a1=4000028a98 a2=4000026cc0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:31.032000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:31.043000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:54:31.043000 audit[1101]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022889 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:31.043000 audit: CWD cwd="/" Feb 9 09:54:31.043000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:31.043000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:31.043000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:54:37.682000 audit: BPF prog-id=12 op=LOAD Feb 9 09:54:37.682000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:54:37.689000 audit: BPF prog-id=13 op=LOAD Feb 9 09:54:37.697000 audit: BPF prog-id=14 op=LOAD Feb 9 09:54:37.697000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:54:37.697000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:54:37.704000 audit: BPF prog-id=15 op=LOAD Feb 9 09:54:37.704000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:54:37.711000 audit: BPF prog-id=16 op=LOAD Feb 9 09:54:37.718000 audit: BPF prog-id=17 op=LOAD Feb 9 09:54:37.718000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:54:37.718000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:54:37.725000 audit: BPF prog-id=18 op=LOAD Feb 9 09:54:37.725000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:54:37.732000 audit: BPF prog-id=19 op=LOAD Feb 9 09:54:37.738000 audit: BPF prog-id=20 op=LOAD Feb 9 09:54:37.738000 audit: BPF prog-id=16 op=UNLOAD Feb 9 09:54:37.738000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:54:37.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.766000 audit: BPF prog-id=18 op=UNLOAD Feb 9 09:54:37.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:37.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:54:38.397000 audit: BPF prog-id=21 op=LOAD Feb 9 09:54:38.397000 audit: BPF prog-id=22 op=LOAD Feb 9 09:54:38.397000 audit: BPF prog-id=23 op=LOAD Feb 9 09:54:38.397000 audit: BPF prog-id=19 op=UNLOAD Feb 9 09:54:38.398000 audit: BPF prog-id=20 op=UNLOAD Feb 9 09:54:38.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.536000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:54:38.536000 audit[1208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe6ad3460 a2=4000 a3=1 items=0 ppid=1 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:38.536000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:54:37.680887 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:54:30.978835 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:54:37.740200 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 09:54:31.013883 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:54:37.740562 systemd[1]: systemd-journald.service: Consumed 4.030s CPU time. 
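[Annotation] The torcx-generator entries around here show it resolving profiles from /usr/share/torcx/profiles/ before sealing a runtime profile. For reference, such a profile is a small JSON manifest; a minimal sketch (the "kind" string is assumed from torcx's v0 manifest format and not captured in this log):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }

The image name and reference match the archive the generator later unpacks (docker:com.coreos.cl.torcx.tgz).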
Feb 9 09:54:31.013910 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:54:31.013946 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:54:31.013956 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:54:31.014000 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:54:31.014012 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:54:31.014215 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:54:31.014247 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:54:31.014258 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:54:31.014639 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:54:31.014689 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:54:31.014707 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:54:31.014721 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:54:31.014737 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:54:31.014750 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:54:36.638698 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:36.638950 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:36.639041 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:36.639188 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:54:36.639233 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:54:36.639288 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T09:54:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:54:38.554034 systemd[1]: Started systemd-journald.service. Feb 9 09:54:38.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.554938 systemd[1]: Mounted tmp.mount. Feb 9 09:54:38.558870 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:54:38.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.564199 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:54:38.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.569286 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:54:38.569416 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:54:38.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.574853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:54:38.574992 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:54:38.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:38.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.580738 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:54:38.580966 systemd[1]: Finished modprobe@drm.service. Feb 9 09:54:38.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.586441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:54:38.586587 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:54:38.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.593042 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:54:38.593190 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:54:38.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.598748 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:54:38.598867 systemd[1]: Finished modprobe@loop.service. Feb 9 09:54:38.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.604388 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:54:38.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.610532 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:54:38.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.616588 systemd[1]: Finished systemd-remount-fs.service. 
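[Annotation] Each modprobe@<name>.service start/stop pair above is one instantiation of systemd's modprobe@.service template, which loads the kernel module named by the instance string. A sketch of the upstream template, paraphrased (exact options vary by systemd version):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=/sbin/modprobe -abq %i

So modprobe@drm.service effectively runs "modprobe -abq drm", exits, and is logged as "Deactivated successfully" followed by "Finished".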
Feb 9 09:54:38.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.622223 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:54:38.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.629889 systemd[1]: Reached target network-pre.target. Feb 9 09:54:38.637517 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:54:38.643866 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:54:38.648373 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:54:38.689231 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:54:38.695377 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:54:38.700485 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:54:38.701570 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:54:38.706328 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:54:38.707375 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:54:38.712319 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:54:38.717225 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:54:38.723839 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:54:38.728669 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:54:38.736532 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:54:38.737263 systemd-journald[1208]: Time spent on flushing to /var/log/journal/529bb560cda14f2283ef828932c13670 is 15.131ms for 1150 entries. Feb 9 09:54:38.737263 systemd-journald[1208]: System Journal (/var/log/journal/529bb560cda14f2283ef828932c13670) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:54:38.820575 systemd-journald[1208]: Received client request to flush runtime journal. Feb 9 09:54:38.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:38.768419 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:54:38.773455 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:54:38.807561 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:54:38.821423 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:54:38.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:39.238306 systemd[1]: Finished systemd-sysusers.service. 
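[Annotation] The journal-flush statistics above show the runtime journal migrating to /var/log/journal with the system journal capped at 2.6G (by default journald sizes these as a fraction of the filesystem). The caps can be pinned with journald.conf-style settings; a sketch with illustrative values, not read from this host:

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=78M
    SystemMaxUse=2600M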
Feb 9 09:54:39.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:39.966371 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:54:39.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:39.971000 audit: BPF prog-id=24 op=LOAD Feb 9 09:54:39.971000 audit: BPF prog-id=25 op=LOAD Feb 9 09:54:39.971000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:54:39.971000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:54:39.972912 systemd[1]: Starting systemd-udevd.service... Feb 9 09:54:39.990200 systemd-udevd[1225]: Using default interface naming scheme 'v252'. Feb 9 09:54:40.184509 systemd[1]: Started systemd-udevd.service. Feb 9 09:54:40.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.193000 audit: BPF prog-id=26 op=LOAD Feb 9 09:54:40.195875 systemd[1]: Starting systemd-networkd.service... Feb 9 09:54:40.225775 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:54:40.284000 audit: BPF prog-id=27 op=LOAD Feb 9 09:54:40.284000 audit: BPF prog-id=28 op=LOAD Feb 9 09:54:40.284000 audit: BPF prog-id=29 op=LOAD Feb 9 09:54:40.286207 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:54:40.289672 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:54:40.317691 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:54:40.317761 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:54:40.322703 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:54:40.323000 audit[1239]: AVC avc: denied { confidentiality } for pid=1239 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:54:40.336898 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:54:40.337005 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:54:40.337023 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:54:40.337107 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:54:40.337686 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:54:40.337726 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:54:40.337762 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:54:40.337777 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:54:40.083591 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:54:40.137883 systemd-journald[1208]: Time jumped backwards, rotating. Feb 9 09:54:40.137961 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:54:40.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:40.323000 audit[1239]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaff6629a0 a1=aa2c a2=ffff9f0824b0 a3=aaaaff5bd010 items=12 ppid=1225 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:40.323000 audit: CWD cwd="/" Feb 9 09:54:40.323000 audit: PATH item=0 name=(null) inode=7219 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=1 name=(null) inode=10784 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=2 name=(null) inode=10784 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=3 name=(null) inode=10785 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=4 name=(null) inode=10784 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=5 name=(null) inode=10786 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=6 name=(null) inode=10784 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=7 name=(null) inode=10787 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=8 name=(null) inode=10784 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=9 name=(null) inode=10788 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=10 name=(null) inode=10784 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PATH item=11 name=(null) inode=10789 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:54:40.323000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:54:40.086473 systemd[1]: Started systemd-userdbd.service. Feb 9 09:54:40.326877 systemd-networkd[1244]: lo: Link UP Feb 9 09:54:40.326888 systemd-networkd[1244]: lo: Gained carrier Feb 9 09:54:40.327268 systemd-networkd[1244]: Enumeration completed Feb 9 09:54:40.327366 systemd[1]: Started systemd-networkd.service. 
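[Annotation] The AVC and PATH records above (the "use of tracefs" lockdown denial and the tracefs file creations by the udev worker) are easier to read back through ausearch than from the raw journal; for example:

    ausearch -m AVC,SYSCALL --start boot -i

where -i interprets the numeric uid/arch/syscall fields into symbolic names.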
Feb 9 09:54:40.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.333320 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:54:40.357822 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:54:40.373464 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1230) Feb 9 09:54:40.388109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:54:40.396858 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:54:40.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.403451 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:54:40.425430 kernel: mlx5_core 37e6:00:02.0 enP14310s1: Link up Feb 9 09:54:40.454445 kernel: hv_netvsc 000d3af6-1a26-000d-3af6-1a26000d3af6 eth0: Data path switched to VF: enP14310s1 Feb 9 09:54:40.455292 systemd-networkd[1244]: enP14310s1: Link UP Feb 9 09:54:40.455651 systemd-networkd[1244]: eth0: Link UP Feb 9 09:54:40.455662 systemd-networkd[1244]: eth0: Gained carrier Feb 9 09:54:40.462963 systemd-networkd[1244]: enP14310s1: Gained carrier Feb 9 09:54:40.472513 systemd-networkd[1244]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:54:40.706353 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:40.755648 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:54:40.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.761219 systemd[1]: Reached target cryptsetup.target. Feb 9 09:54:40.767175 systemd[1]: Starting lvm2-activation.service... Feb 9 09:54:40.771114 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:54:40.790366 systemd[1]: Finished lvm2-activation.service. Feb 9 09:54:40.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:40.795951 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:54:40.801695 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:54:40.801723 systemd[1]: Reached target local-fs.target. Feb 9 09:54:40.806370 systemd[1]: Reached target machines.target. Feb 9 09:54:40.811924 systemd[1]: Starting ldconfig.service... Feb 9 09:54:40.816210 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:54:40.816275 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:40.817553 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:54:40.823264 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
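[Annotation] eth0 above is matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network; in sketch form (simplified from the shipped file) it enables DHCP on any interface not claimed by an earlier-sorted .network file:

    [Match]
    Name=*

    [Network]
    DHCP=yes

The lease source 168.63.129.16 is Azure's platform wireserver address, which reappears in the waagent entries later in this log.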
Feb 9 09:54:40.830865 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:54:40.836674 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:40.836734 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:54:40.837819 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:54:40.872259 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1308 (bootctl) Feb 9 09:54:40.873683 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:54:41.359190 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:54:41.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:41.677550 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:54:41.708657 systemd-fsck[1316]: fsck.fat 4.2 (2021-01-31) Feb 9 09:54:41.708657 systemd-fsck[1316]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 09:54:41.710438 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:54:41.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:41.718183 systemd[1]: Mounting boot.mount... Feb 9 09:54:41.758840 systemd[1]: Mounted boot.mount. Feb 9 09:54:41.770869 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:54:41.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:41.846360 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:54:41.900936 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:54:41.901531 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:54:41.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:41.930260 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:54:41.990557 systemd-networkd[1244]: eth0: Gained IPv6LL Feb 9 09:54:41.996234 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:54:42.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.564315 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:54:42.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:42.573888 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 09:54:42.573934 kernel: audit: type=1130 audit(1707472482.569:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.575294 systemd[1]: Starting audit-rules.service... Feb 9 09:54:42.599926 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:54:42.606410 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:54:42.613000 audit: BPF prog-id=30 op=LOAD Feb 9 09:54:42.621617 systemd[1]: Starting systemd-resolved.service... Feb 9 09:54:42.622438 kernel: audit: type=1334 audit(1707472482.613:168): prog-id=30 op=LOAD Feb 9 09:54:42.631000 audit: BPF prog-id=31 op=LOAD Feb 9 09:54:42.633244 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:54:42.642804 kernel: audit: type=1334 audit(1707472482.631:169): prog-id=31 op=LOAD Feb 9 09:54:42.644952 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:54:42.714891 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:54:42.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.737731 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:54:42.738451 kernel: audit: type=1130 audit(1707472482.719:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.738000 audit[1333]: SYSTEM_BOOT pid=1333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.758308 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:54:42.767892 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:54:42.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.789162 kernel: audit: type=1127 audit(1707472482.738:171): pid=1333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.789242 kernel: audit: type=1130 audit(1707472482.767:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.790438 kernel: audit: type=1130 audit(1707472482.789:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:54:42.791316 systemd[1]: Reached target time-set.target. Feb 9 09:54:42.828929 systemd-resolved[1326]: Positive Trust Anchors: Feb 9 09:54:42.828943 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:54:42.828970 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:54:42.904004 systemd-resolved[1326]: Using system hostname 'ci-3510.3.2-a-8b452ef1bd'. Feb 9 09:54:42.905731 systemd[1]: Started systemd-resolved.service. Feb 9 09:54:42.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.915526 systemd[1]: Reached target network.target. Feb 9 09:54:42.934506 kernel: audit: type=1130 audit(1707472482.910:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:42.933140 systemd[1]: Reached target network-online.target. Feb 9 09:54:42.938481 systemd[1]: Reached target nss-lookup.target. Feb 9 09:54:42.980683 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:54:42.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:43.005488 kernel: audit: type=1130 audit(1707472482.986:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:43.084000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:43.084000 audit[1343]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffc2aec50 a2=420 a3=0 items=0 ppid=1322 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:54:43.097485 kernel: audit: type=1305 audit(1707472483.084:176): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:54:43.084000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:54:43.104449 augenrules[1343]: No rules Feb 9 09:54:43.105443 systemd[1]: Finished audit-rules.service. Feb 9 09:54:43.312460 systemd-timesyncd[1330]: Contacted time server 205.233.73.201:123 (0.flatcar.pool.ntp.org). Feb 9 09:54:43.312839 systemd-timesyncd[1330]: Initial clock synchronization to Fri 2024-02-09 09:54:43.264320 UTC. Feb 9 09:54:49.966764 ldconfig[1307]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
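[Annotation] The PROCTITLE fields in the audit records above are hex-encoded argv arrays with NUL separators; the one logged for auditctl decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A Python one-liner sketch for decoding them:

    # decode an audit PROCTITLE value (hex-encoded argv, NUL-separated)
    h = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(bytes.fromhex(h).replace(b"\x00", b" ").decode())
    # -> /sbin/auditctl -R /etc/audit/audit.rules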
Feb 9 09:54:49.981048 systemd[1]: Finished ldconfig.service. Feb 9 09:54:49.987648 systemd[1]: Starting systemd-update-done.service... Feb 9 09:54:50.025709 systemd[1]: Finished systemd-update-done.service. Feb 9 09:54:50.032324 systemd[1]: Reached target sysinit.target. Feb 9 09:54:50.038463 systemd[1]: Started motdgen.path. Feb 9 09:54:50.042684 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:54:50.049910 systemd[1]: Started logrotate.timer. Feb 9 09:54:50.054035 systemd[1]: Started mdadm.timer. Feb 9 09:54:50.058121 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:54:50.063026 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:54:50.063054 systemd[1]: Reached target paths.target. Feb 9 09:54:50.067454 systemd[1]: Reached target timers.target. Feb 9 09:54:50.072614 systemd[1]: Listening on dbus.socket. Feb 9 09:54:50.077516 systemd[1]: Starting docker.socket... Feb 9 09:54:50.084499 systemd[1]: Listening on sshd.socket. Feb 9 09:54:50.089139 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:50.089644 systemd[1]: Listening on docker.socket. Feb 9 09:54:50.093956 systemd[1]: Reached target sockets.target. Feb 9 09:54:50.098432 systemd[1]: Reached target basic.target. Feb 9 09:54:50.102672 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:50.102698 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:54:50.103863 systemd[1]: Starting containerd.service... Feb 9 09:54:50.111020 systemd[1]: Starting dbus.service... Feb 9 09:54:50.116977 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:54:50.123013 systemd[1]: Starting extend-filesystems.service... Feb 9 09:54:50.128676 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:54:50.129815 systemd[1]: Starting motdgen.service... Feb 9 09:54:50.134981 systemd[1]: Started nvidia.service. Feb 9 09:54:50.141298 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:54:50.147861 systemd[1]: Starting prepare-critools.service... Feb 9 09:54:50.152895 systemd[1]: Starting prepare-helm.service... Feb 9 09:54:50.157886 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:54:50.164146 systemd[1]: Starting sshd-keygen.service... Feb 9 09:54:50.170348 systemd[1]: Starting systemd-logind.service... Feb 9 09:54:50.174682 jq[1353]: false Feb 9 09:54:50.175725 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:54:50.175787 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:54:50.176212 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:54:50.176937 systemd[1]: Starting update-engine.service... Feb 9 09:54:50.182293 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:54:50.194363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
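[Annotation] At this point the boot has reached sockets.target with dbus.socket, sshd.socket and docker.socket listening; the services behind them are started lazily on first connection. The active listeners and the units they trigger can be inspected with:

    systemctl list-sockets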
Feb 9 09:54:50.195157 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:54:50.204158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:54:50.204339 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:54:50.206589 jq[1372]: true Feb 9 09:54:50.206852 extend-filesystems[1354]: Found sda Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda1 Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda2 Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda3 Feb 9 09:54:50.213716 extend-filesystems[1354]: Found usr Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda4 Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda6 Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda7 Feb 9 09:54:50.213716 extend-filesystems[1354]: Found sda9 Feb 9 09:54:50.213716 extend-filesystems[1354]: Checking size of /dev/sda9 Feb 9 09:54:50.213574 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:54:50.261447 env[1381]: time="2024-02-09T09:54:50.260689451Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:54:50.213745 systemd[1]: Finished motdgen.service. Feb 9 09:54:50.278206 env[1381]: time="2024-02-09T09:54:50.278159418Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:54:50.278320 env[1381]: time="2024-02-09T09:54:50.278309032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:50.279408 env[1381]: time="2024-02-09T09:54:50.279368469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:50.279408 env[1381]: time="2024-02-09T09:54:50.279404076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:50.279666 env[1381]: time="2024-02-09T09:54:50.279636482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:50.279666 env[1381]: time="2024-02-09T09:54:50.279663147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:50.279719 env[1381]: time="2024-02-09T09:54:50.279676280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:54:50.279719 env[1381]: time="2024-02-09T09:54:50.279685581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:50.279785 env[1381]: time="2024-02-09T09:54:50.279756956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:50.280002 env[1381]: time="2024-02-09T09:54:50.279978583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:54:50.280125 env[1381]: time="2024-02-09T09:54:50.280101173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:54:50.280125 env[1381]: time="2024-02-09T09:54:50.280122968Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:54:50.280192 env[1381]: time="2024-02-09T09:54:50.280171549Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:54:50.280192 env[1381]: time="2024-02-09T09:54:50.280190351Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:54:50.294180 env[1381]: time="2024-02-09T09:54:50.294138310Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:54:50.294180 env[1381]: time="2024-02-09T09:54:50.294185254Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:54:50.294310 env[1381]: time="2024-02-09T09:54:50.294199385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:54:50.294310 env[1381]: time="2024-02-09T09:54:50.294239383Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294310 env[1381]: time="2024-02-09T09:54:50.294255750Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294310 env[1381]: time="2024-02-09T09:54:50.294269482Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294310 env[1381]: time="2024-02-09T09:54:50.294284012Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294689 env[1381]: time="2024-02-09T09:54:50.294665393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294744 env[1381]: time="2024-02-09T09:54:50.294693656Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294744 env[1381]: time="2024-02-09T09:54:50.294709942Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294744 env[1381]: time="2024-02-09T09:54:50.294722836Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.294744 env[1381]: time="2024-02-09T09:54:50.294736528Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:54:50.294888 env[1381]: time="2024-02-09T09:54:50.294864946Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:54:50.294969 env[1381]: time="2024-02-09T09:54:50.294948455Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:54:50.295218 env[1381]: time="2024-02-09T09:54:50.295198465Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:54:50.295261 env[1381]: time="2024-02-09T09:54:50.295229601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 09:54:50.295261 env[1381]: time="2024-02-09T09:54:50.295243014Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:54:50.295308 env[1381]: time="2024-02-09T09:54:50.295291116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295308 env[1381]: time="2024-02-09T09:54:50.295304808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295348 env[1381]: time="2024-02-09T09:54:50.295316823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295348 env[1381]: time="2024-02-09T09:54:50.295328120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295348 env[1381]: time="2024-02-09T09:54:50.295340176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295405 env[1381]: time="2024-02-09T09:54:50.295352031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295405 env[1381]: time="2024-02-09T09:54:50.295363688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295405 env[1381]: time="2024-02-09T09:54:50.295375783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295405 env[1381]: time="2024-02-09T09:54:50.295389275Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:54:50.295556 env[1381]: time="2024-02-09T09:54:50.295532263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295594 env[1381]: time="2024-02-09T09:54:50.295556733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295594 env[1381]: time="2024-02-09T09:54:50.295570385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:54:50.295594 env[1381]: time="2024-02-09T09:54:50.295581682Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:54:50.295654 env[1381]: time="2024-02-09T09:54:50.295595414Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:54:50.295654 env[1381]: time="2024-02-09T09:54:50.295606232Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:54:50.295654 env[1381]: time="2024-02-09T09:54:50.295627030Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:54:50.295711 env[1381]: time="2024-02-09T09:54:50.295661000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:54:50.295909 env[1381]: time="2024-02-09T09:54:50.295856641Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.295918275Z" level=info msg="Connect containerd service" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.295948453Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.296578527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.296832448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.296869812Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.296917714Z" level=info msg="containerd successfully booted in 0.036840s" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.303354730Z" level=info msg="Start subscribing containerd event" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.303448579Z" level=info msg="Start recovering state" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.303513726Z" level=info msg="Start event monitor" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.303534443Z" level=info msg="Start snapshots syncer" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.303545541Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:54:50.325756 env[1381]: time="2024-02-09T09:54:50.303552846Z" level=info msg="Start streaming server" Feb 9 09:54:50.326050 jq[1382]: true Feb 9 09:54:50.296998 systemd[1]: Started containerd.service. Feb 9 09:54:50.335290 systemd-logind[1369]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 09:54:50.336767 systemd-logind[1369]: New seat seat0. Feb 9 09:54:50.378295 extend-filesystems[1354]: Old size kept for /dev/sda9 Feb 9 09:54:50.393764 extend-filesystems[1354]: Found sr0 Feb 9 09:54:50.384234 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:54:50.384408 systemd[1]: Finished extend-filesystems.service. Feb 9 09:54:50.406880 tar[1375]: crictl Feb 9 09:54:50.408280 tar[1376]: linux-arm64/helm Feb 9 09:54:50.409111 tar[1374]: ./ Feb 9 09:54:50.409111 tar[1374]: ./macvlan Feb 9 09:54:50.414858 dbus-daemon[1352]: [system] SELinux support is enabled Feb 9 09:54:50.421099 dbus-daemon[1352]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:54:50.415010 systemd[1]: Started dbus.service. Feb 9 09:54:50.420587 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:54:50.420608 systemd[1]: Reached target system-config.target. Feb 9 09:54:50.430162 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:54:50.430185 systemd[1]: Reached target user-config.target. Feb 9 09:54:50.441505 systemd[1]: Started systemd-logind.service. Feb 9 09:54:50.466761 bash[1411]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:54:50.467488 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:54:50.530512 tar[1374]: ./static Feb 9 09:54:50.562764 tar[1374]: ./vlan Feb 9 09:54:50.602939 tar[1374]: ./portmap Feb 9 09:54:50.630736 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:54:50.636118 tar[1374]: ./host-local Feb 9 09:54:50.664924 tar[1374]: ./vrf Feb 9 09:54:50.695628 tar[1374]: ./bridge Feb 9 09:54:50.732695 tar[1374]: ./tuning Feb 9 09:54:50.761931 tar[1374]: ./firewall Feb 9 09:54:50.798823 tar[1374]: ./host-device Feb 9 09:54:50.868405 tar[1374]: ./sbr Feb 9 09:54:50.943622 tar[1374]: ./loopback Feb 9 09:54:50.988926 tar[1374]: ./dhcp Feb 9 09:54:50.996329 update_engine[1371]: I0209 09:54:50.978224 1371 main.cc:92] Flatcar Update Engine starting Feb 9 09:54:51.045803 systemd[1]: Started update-engine.service. Feb 9 09:54:51.054007 update_engine[1371]: I0209 09:54:51.045847 1371 update_check_scheduler.cc:74] Next update check in 7m41s Feb 9 09:54:51.058169 systemd[1]: Started locksmithd.service. 
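[Annotation] The CRI config dump above corresponds to containerd 1.6's TOML configuration. A sketch of the fragment that yields the key settings visible in the dump (overlayfs snapshotter, runc via io.containerd.runc.v2 with systemd cgroups, pause:3.6 sandbox image) — reconstructed for illustration, not copied from this host:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true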
Feb 9 09:54:51.149520 tar[1374]: ./ptp Feb 9 09:54:51.215679 tar[1374]: ./ipvlan Feb 9 09:54:51.279197 tar[1374]: ./bandwidth Feb 9 09:54:51.330877 tar[1376]: linux-arm64/LICENSE Feb 9 09:54:51.330980 tar[1376]: linux-arm64/README.md Feb 9 09:54:51.346730 systemd[1]: Finished prepare-helm.service. Feb 9 09:54:51.360975 systemd[1]: Finished prepare-critools.service. Feb 9 09:54:51.405860 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:54:52.480139 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:54:52.712488 sshd_keygen[1370]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:54:52.729979 systemd[1]: Finished sshd-keygen.service. Feb 9 09:54:52.736198 systemd[1]: Starting issuegen.service... Feb 9 09:54:52.741702 systemd[1]: Started waagent.service. Feb 9 09:54:52.746146 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:54:52.746305 systemd[1]: Finished issuegen.service. Feb 9 09:54:52.751827 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:54:52.773246 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:54:52.780275 systemd[1]: Started getty@tty1.service. Feb 9 09:54:52.786196 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:54:52.791825 systemd[1]: Reached target getty.target. Feb 9 09:54:52.796088 systemd[1]: Reached target multi-user.target. Feb 9 09:54:52.802089 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:54:52.813603 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:54:52.813748 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:54:52.819343 systemd[1]: Startup finished in 794ms (kernel) + 17.937s (initrd) + 24.511s (userspace) = 43.243s. Feb 9 09:54:53.539264 login[1483]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 09:54:53.556530 login[1484]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:54:53.681223 systemd[1]: Created slice user-500.slice. Feb 9 09:54:53.682308 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:54:53.684433 systemd-logind[1369]: New session 1 of user core. Feb 9 09:54:53.719465 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:54:53.720871 systemd[1]: Starting user@500.service... Feb 9 09:54:53.752028 (systemd)[1487]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:54:53.954240 systemd[1487]: Queued start job for default target default.target. Feb 9 09:54:53.954750 systemd[1487]: Reached target paths.target. Feb 9 09:54:53.954770 systemd[1487]: Reached target sockets.target. Feb 9 09:54:53.954780 systemd[1487]: Reached target timers.target. Feb 9 09:54:53.954790 systemd[1487]: Reached target basic.target. Feb 9 09:54:53.954888 systemd[1]: Started user@500.service. Feb 9 09:54:53.955742 systemd[1]: Started session-1.scope. Feb 9 09:54:53.956188 systemd[1487]: Reached target default.target. Feb 9 09:54:53.956333 systemd[1487]: Startup finished in 198ms. Feb 9 09:54:54.539714 login[1483]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:54:54.544017 systemd[1]: Started session-2.scope. Feb 9 09:54:54.544333 systemd-logind[1369]: New session 2 of user core. 
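[Annotation] locksmithd above starts with strategy="reboot", the reboot-coordination policy it shares with update_engine (whose next check was scheduled a few entries earlier). On Flatcar this strategy is normally set in /etc/flatcar/update.conf; a sketch with illustrative values:

    GROUP=stable
    REBOOT_STRATEGY=reboot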
Feb 9 09:54:59.720567 waagent[1481]: 2024-02-09T09:54:59.720094Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 09:54:59.754334 waagent[1481]: 2024-02-09T09:54:59.754229Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 09:54:59.759318 waagent[1481]: 2024-02-09T09:54:59.759237Z INFO Daemon Daemon Python: 3.9.16 Feb 9 09:54:59.764467 waagent[1481]: 2024-02-09T09:54:59.764302Z INFO Daemon Daemon Run daemon Feb 9 09:54:59.768992 waagent[1481]: 2024-02-09T09:54:59.768931Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 09:54:59.786719 waagent[1481]: 2024-02-09T09:54:59.786581Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:54:59.802688 waagent[1481]: 2024-02-09T09:54:59.802554Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:54:59.813288 waagent[1481]: 2024-02-09T09:54:59.813196Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:54:59.818978 waagent[1481]: 2024-02-09T09:54:59.818894Z INFO Daemon Daemon Using waagent for provisioning Feb 9 09:54:59.825587 waagent[1481]: 2024-02-09T09:54:59.825511Z INFO Daemon Daemon Activate resource disk Feb 9 09:54:59.831064 waagent[1481]: 2024-02-09T09:54:59.830987Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 09:54:59.846212 waagent[1481]: 2024-02-09T09:54:59.846124Z INFO Daemon Daemon Found device: None Feb 9 09:54:59.851229 waagent[1481]: 2024-02-09T09:54:59.851146Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 09:54:59.860603 waagent[1481]: 2024-02-09T09:54:59.860511Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 09:54:59.873223 waagent[1481]: 2024-02-09T09:54:59.873150Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:54:59.879384 waagent[1481]: 2024-02-09T09:54:59.879311Z INFO Daemon Daemon Running default provisioning handler Feb 9 09:54:59.892467 waagent[1481]: 2024-02-09T09:54:59.892308Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:54:59.908378 waagent[1481]: 2024-02-09T09:54:59.908246Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:54:59.918346 waagent[1481]: 2024-02-09T09:54:59.918259Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:54:59.924026 waagent[1481]: 2024-02-09T09:54:59.923946Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 09:55:00.053158 waagent[1481]: 2024-02-09T09:55:00.052959Z INFO Daemon Daemon Successfully mounted dvd Feb 9 09:55:00.346486 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 09:55:00.398126 waagent[1481]: 2024-02-09T09:55:00.397969Z INFO Daemon Daemon Detect protocol endpoint Feb 9 09:55:00.404557 waagent[1481]: 2024-02-09T09:55:00.404470Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:55:00.411996 waagent[1481]: 2024-02-09T09:55:00.411914Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 09:55:00.419710 waagent[1481]: 2024-02-09T09:55:00.419633Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 09:55:00.426312 waagent[1481]: 2024-02-09T09:55:00.426241Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 09:55:00.433004 waagent[1481]: 2024-02-09T09:55:00.432932Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 09:55:00.574235 waagent[1481]: 2024-02-09T09:55:00.574169Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 09:55:00.582317 waagent[1481]: 2024-02-09T09:55:00.582266Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 09:55:00.588805 waagent[1481]: 2024-02-09T09:55:00.588717Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 09:55:01.373372 waagent[1481]: 2024-02-09T09:55:01.373206Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 09:55:01.393595 waagent[1481]: 2024-02-09T09:55:01.393505Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 09:55:01.399916 waagent[1481]: 2024-02-09T09:55:01.399829Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 09:55:01.477346 waagent[1481]: 2024-02-09T09:55:01.477202Z INFO Daemon Daemon Found private key matching thumbprint 575C47585D1459C00D5A0F07B441A63D325669A3 Feb 9 09:55:01.489252 waagent[1481]: 2024-02-09T09:55:01.489153Z INFO Daemon Daemon Certificate with thumbprint CCF95EB6DF56FA80DF6FFCFEF6B62AF88FBBDB5A has no matching private key. Feb 9 09:55:01.500017 waagent[1481]: 2024-02-09T09:55:01.499923Z INFO Daemon Daemon Fetch goal state completed Feb 9 09:55:01.529329 waagent[1481]: 2024-02-09T09:55:01.529270Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 1970b38a-e203-4037-9a41-a908cc6571c7 New eTag: 12915714738432533654] Feb 9 09:55:01.541716 waagent[1481]: 2024-02-09T09:55:01.541625Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:55:01.558384 waagent[1481]: 2024-02-09T09:55:01.558318Z INFO Daemon Daemon Starting provisioning Feb 9 09:55:01.564065 waagent[1481]: 2024-02-09T09:55:01.563986Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 09:55:01.569002 waagent[1481]: 2024-02-09T09:55:01.568931Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-8b452ef1bd] Feb 9 09:55:01.641480 waagent[1481]: 2024-02-09T09:55:01.641324Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-8b452ef1bd] Feb 9 09:55:01.648929 waagent[1481]: 2024-02-09T09:55:01.648835Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 09:55:01.656412 waagent[1481]: 2024-02-09T09:55:01.656326Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 09:55:01.673688 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 09:55:01.673856 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 09:55:01.673917 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 09:55:01.674138 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:55:01.679469 systemd-networkd[1244]: eth0: DHCPv6 lease lost Feb 9 09:55:01.681041 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:55:01.681205 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:55:01.683167 systemd[1]: Starting systemd-networkd.service... 
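Note: the repeated "Unable to get cloud-init enabled status" errors above are expected on Flatcar, which ships neither cloud-init nor the legacy `service` wrapper; hence "cloud-init is enabled: False" and the fall-through to waagent provisioning. A minimal reproduction of the probe the agent runs:

    import subprocess
    # Same probe waagent logs above; on Flatcar the unit does not exist,
    # so is-enabled exits non-zero and the agent provisions the VM itself.
    r = subprocess.run(["systemctl", "is-enabled", "cloud-init-local.service"],
                       capture_output=True, text=True)
    print(r.returncode, (r.stdout or r.stderr).strip())

Note: the route test and the "Fabric preferred wire protocol version" exchange above are the agent speaking plain HTTP to the Azure WireServer at 168.63.129.16. A sketch of that first version handshake, assuming the documented `?comp=versions` query (only reachable from inside an Azure VM):

    import urllib.request
    # Assumed endpoint form; returns the XML list of supported wire
    # protocol versions behind the "preferred/supported version" lines above.
    with urllib.request.urlopen("http://168.63.129.16/?comp=versions", timeout=5) as resp:
        print(resp.read().decode())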
Feb 9 09:55:01.710647 systemd-networkd[1531]: enP14310s1: Link UP Feb 9 09:55:01.710931 systemd-networkd[1531]: enP14310s1: Gained carrier Feb 9 09:55:01.712108 systemd-networkd[1531]: eth0: Link UP Feb 9 09:55:01.712198 systemd-networkd[1531]: eth0: Gained carrier Feb 9 09:55:01.712630 systemd-networkd[1531]: lo: Link UP Feb 9 09:55:01.712703 systemd-networkd[1531]: lo: Gained carrier Feb 9 09:55:01.712994 systemd-networkd[1531]: eth0: Gained IPv6LL Feb 9 09:55:01.714280 systemd-networkd[1531]: Enumeration completed Feb 9 09:55:01.714486 systemd[1]: Started systemd-networkd.service. Feb 9 09:55:01.716186 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:55:01.721911 waagent[1481]: 2024-02-09T09:55:01.717862Z INFO Daemon Daemon Create user account if not exists Feb 9 09:55:01.724038 waagent[1481]: 2024-02-09T09:55:01.723951Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 09:55:01.733488 waagent[1481]: 2024-02-09T09:55:01.733355Z INFO Daemon Daemon Configure sudoer Feb 9 09:55:01.734050 systemd-networkd[1531]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:55:01.739735 waagent[1481]: 2024-02-09T09:55:01.739649Z INFO Daemon Daemon Configure sshd Feb 9 09:55:01.744677 waagent[1481]: 2024-02-09T09:55:01.744587Z INFO Daemon Daemon Deploy ssh public key. Feb 9 09:55:01.760525 systemd-networkd[1531]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:55:01.762555 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:55:02.984448 waagent[1481]: 2024-02-09T09:55:02.984354Z INFO Daemon Daemon Provisioning complete Feb 9 09:55:03.008206 waagent[1481]: 2024-02-09T09:55:03.008137Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 09:55:03.015779 waagent[1481]: 2024-02-09T09:55:03.015688Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 09:55:03.030381 waagent[1481]: 2024-02-09T09:55:03.030287Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 09:55:03.329060 waagent[1540]: 2024-02-09T09:55:03.328905Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 09:55:03.329781 waagent[1540]: 2024-02-09T09:55:03.329724Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:03.329915 waagent[1540]: 2024-02-09T09:55:03.329870Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:03.342222 waagent[1540]: 2024-02-09T09:55:03.342150Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 09:55:03.342406 waagent[1540]: 2024-02-09T09:55:03.342356Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 09:55:03.415005 waagent[1540]: 2024-02-09T09:55:03.414856Z INFO ExtHandler ExtHandler Found private key matching thumbprint 575C47585D1459C00D5A0F07B441A63D325669A3 Feb 9 09:55:03.415213 waagent[1540]: 2024-02-09T09:55:03.415159Z INFO ExtHandler ExtHandler Certificate with thumbprint CCF95EB6DF56FA80DF6FFCFEF6B62AF88FBBDB5A has no matching private key. 
Feb 9 09:55:03.415465 waagent[1540]: 2024-02-09T09:55:03.415387Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 09:55:03.428348 waagent[1540]: 2024-02-09T09:55:03.428294Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 04816aa7-77de-451d-a0f1-7a15c6e9745a New eTag: 12915714738432533654] Feb 9 09:55:03.428989 waagent[1540]: 2024-02-09T09:55:03.428930Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:55:03.511048 waagent[1540]: 2024-02-09T09:55:03.510893Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:55:03.521285 waagent[1540]: 2024-02-09T09:55:03.521199Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1540 Feb 9 09:55:03.525053 waagent[1540]: 2024-02-09T09:55:03.524982Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:55:03.526421 waagent[1540]: 2024-02-09T09:55:03.526353Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:55:03.634087 waagent[1540]: 2024-02-09T09:55:03.634016Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:55:03.634513 waagent[1540]: 2024-02-09T09:55:03.634446Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:55:03.642343 waagent[1540]: 2024-02-09T09:55:03.642274Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:55:03.642850 waagent[1540]: 2024-02-09T09:55:03.642790Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:55:03.644024 waagent[1540]: 2024-02-09T09:55:03.643960Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 09:55:03.645359 waagent[1540]: 2024-02-09T09:55:03.645288Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:55:03.646177 waagent[1540]: 2024-02-09T09:55:03.646112Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:03.646500 waagent[1540]: 2024-02-09T09:55:03.646405Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:03.647178 waagent[1540]: 2024-02-09T09:55:03.647121Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 09:55:03.647870 waagent[1540]: 2024-02-09T09:55:03.647800Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
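Note: the "[Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'" error above is a Flatcar property, not a transient fault: /lib/systemd/system resolves into the read-only /usr image, so unit files the agent wants to install belong under /etc/systemd/system instead. A two-line check of that split:

    import os
    # On Flatcar, /lib/systemd/system sits on the read-only /usr partition.
    print(os.access("/lib/systemd/system", os.W_OK))  # expected: False
    print(os.access("/etc/systemd/system", os.W_OK))  # expected: True when run as root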
Feb 9 09:55:03.648043 waagent[1540]: 2024-02-09T09:55:03.647971Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:55:03.648043 waagent[1540]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:55:03.648043 waagent[1540]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:55:03.648043 waagent[1540]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:55:03.648043 waagent[1540]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:03.648043 waagent[1540]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:03.648043 waagent[1540]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:03.648550 waagent[1540]: 2024-02-09T09:55:03.648430Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:55:03.648814 waagent[1540]: 2024-02-09T09:55:03.648755Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:55:03.650983 waagent[1540]: 2024-02-09T09:55:03.650834Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:55:03.651985 waagent[1540]: 2024-02-09T09:55:03.651918Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:55:03.652083 waagent[1540]: 2024-02-09T09:55:03.652013Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:03.652404 waagent[1540]: 2024-02-09T09:55:03.652329Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:55:03.652911 waagent[1540]: 2024-02-09T09:55:03.652843Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:03.654008 waagent[1540]: 2024-02-09T09:55:03.653929Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:55:03.657182 waagent[1540]: 2024-02-09T09:55:03.657123Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:55:03.658587 waagent[1540]: 2024-02-09T09:55:03.658519Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:55:03.667431 waagent[1540]: 2024-02-09T09:55:03.667348Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 09:55:03.668103 waagent[1540]: 2024-02-09T09:55:03.668051Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:55:03.669077 waagent[1540]: 2024-02-09T09:55:03.669009Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 09:55:03.694806 waagent[1540]: 2024-02-09T09:55:03.693717Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
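Note: /proc/net/route, dumped above, encodes IPv4 addresses as little-endian hex. Decoding the columns confirms they match the DHCP lease and WireServer endpoint seen earlier in the log:

    import socket, struct
    # /proc/net/route stores addresses as little-endian 32-bit hex.
    def decode(hexaddr: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))
    print(decode("0114C80A"))  # 10.200.20.1   (default gateway)
    print(decode("0014C80A"))  # 10.200.20.0   (eth0 subnet)
    print(decode("10813FA8"))  # 168.63.129.16 (Azure WireServer)
    print(decode("FEA9FEA9"))  # 169.254.169.254 (instance metadata service)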
Feb 9 09:55:03.697210 waagent[1540]: 2024-02-09T09:55:03.697138Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1531' Feb 9 09:55:03.768780 waagent[1540]: 2024-02-09T09:55:03.768652Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:55:03.768780 waagent[1540]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:55:03.768780 waagent[1540]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:55:03.768780 waagent[1540]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:1a:26 brd ff:ff:ff:ff:ff:ff Feb 9 09:55:03.768780 waagent[1540]: 3: enP14310s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:1a:26 brd ff:ff:ff:ff:ff:ff\ altname enP14310p0s2 Feb 9 09:55:03.768780 waagent[1540]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:55:03.768780 waagent[1540]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:55:03.768780 waagent[1540]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:55:03.768780 waagent[1540]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:55:03.768780 waagent[1540]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:55:03.768780 waagent[1540]: 2: eth0 inet6 fe80::20d:3aff:fef6:1a26/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:55:03.840217 waagent[1540]: 2024-02-09T09:55:03.840148Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 09:55:04.034266 waagent[1481]: 2024-02-09T09:55:04.034101Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 09:55:04.038397 waagent[1481]: 2024-02-09T09:55:04.038341Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 09:55:05.184865 waagent[1569]: 2024-02-09T09:55:05.184755Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 09:55:05.185579 waagent[1569]: 2024-02-09T09:55:05.185517Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 09:55:05.185720 waagent[1569]: 2024-02-09T09:55:05.185671Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 09:55:05.194130 waagent[1569]: 2024-02-09T09:55:05.194001Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:55:05.194600 waagent[1569]: 2024-02-09T09:55:05.194537Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:05.194760 waagent[1569]: 2024-02-09T09:55:05.194709Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:05.207624 waagent[1569]: 2024-02-09T09:55:05.207540Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 09:55:05.216149 waagent[1569]: 2024-02-09T09:55:05.216087Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 09:55:05.217209 waagent[1569]: 2024-02-09T09:55:05.217144Z INFO ExtHandler Feb 9 09:55:05.217367 waagent[1569]: 2024-02-09T09:55:05.217315Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 624d8dd6-88e0-4745-b65c-a770074cfbee 
eTag: 12915714738432533654 source: Fabric] Feb 9 09:55:05.218144 waagent[1569]: 2024-02-09T09:55:05.218080Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 09:55:05.219431 waagent[1569]: 2024-02-09T09:55:05.219347Z INFO ExtHandler Feb 9 09:55:05.219587 waagent[1569]: 2024-02-09T09:55:05.219536Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 09:55:05.226003 waagent[1569]: 2024-02-09T09:55:05.225948Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 09:55:05.226516 waagent[1569]: 2024-02-09T09:55:05.226463Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:55:05.249039 waagent[1569]: 2024-02-09T09:55:05.248964Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 09:55:05.327107 waagent[1569]: 2024-02-09T09:55:05.326948Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CCF95EB6DF56FA80DF6FFCFEF6B62AF88FBBDB5A', 'hasPrivateKey': False} Feb 9 09:55:05.328228 waagent[1569]: 2024-02-09T09:55:05.328162Z INFO ExtHandler Downloaded certificate {'thumbprint': '575C47585D1459C00D5A0F07B441A63D325669A3', 'hasPrivateKey': True} Feb 9 09:55:05.329352 waagent[1569]: 2024-02-09T09:55:05.329285Z INFO ExtHandler Fetch goal state completed Feb 9 09:55:05.356295 waagent[1569]: 2024-02-09T09:55:05.356217Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1569 Feb 9 09:55:05.359937 waagent[1569]: 2024-02-09T09:55:05.359862Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:55:05.361476 waagent[1569]: 2024-02-09T09:55:05.361392Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:55:05.367152 waagent[1569]: 2024-02-09T09:55:05.367083Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:55:05.367627 waagent[1569]: 2024-02-09T09:55:05.367565Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:55:05.376186 waagent[1569]: 2024-02-09T09:55:05.376114Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:55:05.376748 waagent[1569]: 2024-02-09T09:55:05.376689Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:55:05.383511 waagent[1569]: 2024-02-09T09:55:05.383364Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 09:55:05.387374 waagent[1569]: 2024-02-09T09:55:05.387301Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 09:55:05.389077 waagent[1569]: 2024-02-09T09:55:05.388994Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:55:05.389800 waagent[1569]: 2024-02-09T09:55:05.389735Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:05.390067 waagent[1569]: 2024-02-09T09:55:05.390018Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:05.390796 waagent[1569]: 2024-02-09T09:55:05.390735Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
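Note: the "Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1531'" error above is a parsing bug: `systemctl show -p MainPID` returns the literal string "MainPID=1531" (PID 1531 being systemd-networkd, per the earlier entries), and the agent fed it to int() unsplit:

    # What the error above amounts to, and the fix:
    raw = "MainPID=1531"       # output of `systemctl show -p MainPID <unit>`
    # int(raw)                 # ValueError: invalid literal for int() with base 10
    pid = int(raw.split("=", 1)[1])
    print(pid)                 # 1531

Note: the 40-hex-digit thumbprints in the certificate entries above are SHA-1 digests over the DER encoding of each certificate, printed uppercase. A sketch of the derivation, with an illustrative PEM path (waagent's actual filenames are not shown in this log):

    import hashlib, ssl
    # Single-certificate PEM assumed; the path is illustrative only.
    pem = open("/var/lib/waagent/example-cert.pem").read()
    der = ssl.PEM_cert_to_DER_cert(pem)
    print(hashlib.sha1(der).hexdigest().upper())  # e.g. 575C4758...69A3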
Feb 9 09:55:05.391207 waagent[1569]: 2024-02-09T09:55:05.391152Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:55:05.391207 waagent[1569]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:55:05.391207 waagent[1569]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:55:05.391207 waagent[1569]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:55:05.391207 waagent[1569]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:05.391207 waagent[1569]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:05.391207 waagent[1569]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:55:05.394060 waagent[1569]: 2024-02-09T09:55:05.393968Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:55:05.394221 waagent[1569]: 2024-02-09T09:55:05.394126Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:55:05.396609 waagent[1569]: 2024-02-09T09:55:05.394881Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:55:05.397063 waagent[1569]: 2024-02-09T09:55:05.396866Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:55:05.397370 waagent[1569]: 2024-02-09T09:55:05.397281Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:55:05.398847 waagent[1569]: 2024-02-09T09:55:05.398771Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:55:05.399560 waagent[1569]: 2024-02-09T09:55:05.399473Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:55:05.399684 waagent[1569]: 2024-02-09T09:55:05.399604Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 09:55:05.400021 waagent[1569]: 2024-02-09T09:55:05.399957Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:55:05.403309 waagent[1569]: 2024-02-09T09:55:05.403151Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:55:05.407024 waagent[1569]: 2024-02-09T09:55:05.406839Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:55:05.413185 waagent[1569]: 2024-02-09T09:55:05.413097Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:55:05.413185 waagent[1569]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:55:05.413185 waagent[1569]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:55:05.413185 waagent[1569]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:1a:26 brd ff:ff:ff:ff:ff:ff Feb 9 09:55:05.413185 waagent[1569]: 3: enP14310s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:1a:26 brd ff:ff:ff:ff:ff:ff\ altname enP14310p0s2 Feb 9 09:55:05.413185 waagent[1569]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:55:05.413185 waagent[1569]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:55:05.413185 waagent[1569]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:55:05.413185 waagent[1569]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:55:05.413185 waagent[1569]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:55:05.413185 waagent[1569]: 2: eth0 inet6 fe80::20d:3aff:fef6:1a26/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:55:05.427648 waagent[1569]: 2024-02-09T09:55:05.427536Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 09:55:05.428053 waagent[1569]: 2024-02-09T09:55:05.427977Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 09:55:05.446341 waagent[1569]: 2024-02-09T09:55:05.446228Z INFO ExtHandler ExtHandler Feb 9 09:55:05.446500 waagent[1569]: 2024-02-09T09:55:05.446405Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d6e436a3-437e-41ae-a57c-86a3e02d3eeb correlation 5c97658b-3c41-42ca-bf37-2a3603c76d2d created: 2024-02-09T09:53:12.526581Z] Feb 9 09:55:05.447429 waagent[1569]: 2024-02-09T09:55:05.447353Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 09:55:05.449301 waagent[1569]: 2024-02-09T09:55:05.449235Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 09:55:05.474365 waagent[1569]: 2024-02-09T09:55:05.474268Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 09:55:05.492178 waagent[1569]: 2024-02-09T09:55:05.492103Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 11453AAF-9553-4BA5-80A3-8ABED2BD0613;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 09:55:05.657648 waagent[1569]: 2024-02-09T09:55:05.657494Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 09:55:05.657648 waagent[1569]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:05.657648 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:55:05.657648 waagent[1569]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:05.657648 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:55:05.657648 waagent[1569]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:05.657648 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:55:05.657648 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:55:05.657648 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:55:05.657648 waagent[1569]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:55:05.665886 waagent[1569]: 2024-02-09T09:55:05.665744Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 09:55:05.665886 waagent[1569]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:05.665886 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:55:05.665886 waagent[1569]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:05.665886 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:55:05.665886 waagent[1569]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:55:05.665886 waagent[1569]: pkts bytes target prot opt in out source destination Feb 9 09:55:05.665886 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:55:05.665886 waagent[1569]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:55:05.665886 waagent[1569]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:55:05.666467 waagent[1569]: 2024-02-09T09:55:05.666381Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 09:55:28.186768 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 9 09:55:36.582509 update_engine[1371]: I0209 09:55:36.582468 1371 update_attempter.cc:509] Updating boot flags... Feb 9 09:55:49.228629 systemd[1]: Created slice system-sshd.slice. Feb 9 09:55:49.229684 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.12.6:57442.service. Feb 9 09:55:49.871664 sshd[1661]: Accepted publickey for core from 10.200.12.6 port 57442 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:49.889968 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:49.894498 systemd[1]: Started session-3.scope. Feb 9 09:55:49.895507 systemd-logind[1369]: New session 3 of user core. Feb 9 09:55:50.240035 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.12.6:57452.service. Feb 9 09:55:50.660717 sshd[1666]: Accepted publickey for core from 10.200.12.6 port 57452 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:50.662313 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:50.666336 systemd[1]: Started session-4.scope. Feb 9 09:55:50.667457 systemd-logind[1369]: New session 4 of user core. Feb 9 09:55:50.965388 sshd[1666]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:50.967889 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:55:50.967890 systemd-logind[1369]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:55:50.968862 systemd[1]: sshd@1-10.200.20.40:22-10.200.12.6:57452.service: Deactivated successfully. 
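Note: the firewall dump above shows the three OUTPUT rules waagent maintains toward the WireServer: allow DNS (dpt:53), allow root-owned traffic (UID 0 match), and drop new connections from everyone else. Roughly equivalent iptables calls, in the same order (filter table assumed, since the dump does not name one; run as root; a sketch, not waagent's own code):

    import subprocess
    wire = "168.63.129.16"
    for rule in (
        ["-p", "tcp", "-d", wire, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", wire, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", wire, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ):
        subprocess.run(["iptables", "-A", "OUTPUT", *rule], check=True)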
Feb 9 09:55:50.969627 systemd-logind[1369]: Removed session 4. Feb 9 09:55:51.056262 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.12.6:57462.service. Feb 9 09:55:51.470088 sshd[1672]: Accepted publickey for core from 10.200.12.6 port 57462 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:51.471359 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:51.475473 systemd-logind[1369]: New session 5 of user core. Feb 9 09:55:51.475474 systemd[1]: Started session-5.scope. Feb 9 09:55:51.766619 sshd[1672]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:51.768887 systemd[1]: sshd@2-10.200.20.40:22-10.200.12.6:57462.service: Deactivated successfully. Feb 9 09:55:51.769592 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:55:51.770137 systemd-logind[1369]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:55:51.770947 systemd-logind[1369]: Removed session 5. Feb 9 09:55:51.836023 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.12.6:57470.service. Feb 9 09:55:52.256282 sshd[1678]: Accepted publickey for core from 10.200.12.6 port 57470 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:52.257563 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:52.261277 systemd-logind[1369]: New session 6 of user core. Feb 9 09:55:52.261713 systemd[1]: Started session-6.scope. Feb 9 09:55:52.560158 sshd[1678]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:52.562451 systemd[1]: sshd@3-10.200.20.40:22-10.200.12.6:57470.service: Deactivated successfully. Feb 9 09:55:52.563086 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:55:52.563727 systemd-logind[1369]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:55:52.564598 systemd-logind[1369]: Removed session 6. Feb 9 09:55:52.630785 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.12.6:57474.service. Feb 9 09:55:53.056252 sshd[1684]: Accepted publickey for core from 10.200.12.6 port 57474 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:55:53.057540 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:53.061687 systemd[1]: Started session-7.scope. Feb 9 09:55:53.062496 systemd-logind[1369]: New session 7 of user core. Feb 9 09:55:53.567091 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:55:53.567280 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:55:54.260192 systemd[1]: Starting docker.service... 
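Note: the "SHA256:hRMU..." string in the sshd entries above is OpenSSH's key fingerprint format: SHA-256 over the decoded public-key blob, base64-encoded with the '=' padding stripped. Recomputing it from the authorized_keys file that update-ssh-keys-after-ignition wrote earlier:

    import base64, hashlib
    # First key in the file; each line is "<type> <base64-blob> [comment]".
    blob_b64 = open("/home/core/.ssh/authorized_keys").read().split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))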
Feb 9 09:55:54.292464 env[1702]: time="2024-02-09T09:55:54.292401095Z" level=info msg="Starting up" Feb 9 09:55:54.293878 env[1702]: time="2024-02-09T09:55:54.293852537Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:54.293878 env[1702]: time="2024-02-09T09:55:54.293872758Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:54.293992 env[1702]: time="2024-02-09T09:55:54.293892179Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:54.293992 env[1702]: time="2024-02-09T09:55:54.293901990Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:54.295407 env[1702]: time="2024-02-09T09:55:54.295385185Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:54.295531 env[1702]: time="2024-02-09T09:55:54.295516126Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:54.295615 env[1702]: time="2024-02-09T09:55:54.295600217Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:54.295670 env[1702]: time="2024-02-09T09:55:54.295658960Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:54.395981 env[1702]: time="2024-02-09T09:55:54.395942111Z" level=info msg="Loading containers: start." Feb 9 09:55:54.535455 kernel: Initializing XFRM netlink socket Feb 9 09:55:54.558340 env[1702]: time="2024-02-09T09:55:54.558299194Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:55:54.675935 systemd-networkd[1531]: docker0: Link UP Feb 9 09:55:54.695670 env[1702]: time="2024-02-09T09:55:54.695634601Z" level=info msg="Loading containers: done." Feb 9 09:55:54.714998 env[1702]: time="2024-02-09T09:55:54.714950379Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:55:54.715154 env[1702]: time="2024-02-09T09:55:54.715134096Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:55:54.715267 env[1702]: time="2024-02-09T09:55:54.715234284Z" level=info msg="Daemon has completed initialization" Feb 9 09:55:54.745537 systemd[1]: Started docker.service. Feb 9 09:55:54.753984 env[1702]: time="2024-02-09T09:55:54.753933231Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:55:54.768605 systemd[1]: Reloading. Feb 9 09:55:54.845509 /usr/lib/systemd/system-generators/torcx-generator[1837]: time="2024-02-09T09:55:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:54.845539 /usr/lib/systemd/system-generators/torcx-generator[1837]: time="2024-02-09T09:55:54Z" level=info msg="torcx already run" Feb 9 09:55:54.889547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:54.889565 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 9 09:55:54.904584 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:54.984255 systemd[1]: Started kubelet.service. Feb 9 09:55:55.048505 kubelet[1890]: E0209 09:55:55.048443 1890 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:55:55.050945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:55.051063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:55:59.303272 env[1381]: time="2024-02-09T09:55:59.303230582Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:56:00.141640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820513604.mount: Deactivated successfully. Feb 9 09:56:01.837324 env[1381]: time="2024-02-09T09:56:01.837266641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.847107 env[1381]: time="2024-02-09T09:56:01.847051186Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.852827 env[1381]: time="2024-02-09T09:56:01.852791212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.859099 env[1381]: time="2024-02-09T09:56:01.859066795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:01.859957 env[1381]: time="2024-02-09T09:56:01.859915951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:56:01.869667 env[1381]: time="2024-02-09T09:56:01.869625669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:56:03.577673 env[1381]: time="2024-02-09T09:56:03.577622438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.584475 env[1381]: time="2024-02-09T09:56:03.584442672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.587931 env[1381]: time="2024-02-09T09:56:03.587897346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.591337 env[1381]: time="2024-02-09T09:56:03.591300057Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:03.592109 env[1381]: time="2024-02-09T09:56:03.592082597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:56:03.600865 env[1381]: time="2024-02-09T09:56:03.600827815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:56:04.849034 env[1381]: time="2024-02-09T09:56:04.848975717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.857499 env[1381]: time="2024-02-09T09:56:04.857465253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.861507 env[1381]: time="2024-02-09T09:56:04.861464139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.869776 env[1381]: time="2024-02-09T09:56:04.869734975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.870559 env[1381]: time="2024-02-09T09:56:04.870531950Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:56:04.878943 env[1381]: time="2024-02-09T09:56:04.878912556Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:56:05.113667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:56:05.113836 systemd[1]: Stopped kubelet.service. Feb 9 09:56:05.115268 systemd[1]: Started kubelet.service. Feb 9 09:56:05.162829 kubelet[1924]: E0209 09:56:05.162766 1924 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:56:05.165892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:56:05.166010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:56:06.007364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800521092.mount: Deactivated successfully. 
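Note: the kubelet crash loop above ("the container runtime endpoint address was not specified or empty") is kubelet 1.26 requiring an explicit CRI endpoint; the socket to point it at is the containerd GRPC address logged at boot. A trivial check that the socket exists, printing the flag to pass:

    import os
    # containerd's address appears earlier in this log.
    sock = "/run/containerd/containerd.sock"
    if os.path.exists(sock):
        print("--container-runtime-endpoint=unix://" + sock)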
Feb 9 09:56:06.745099 env[1381]: time="2024-02-09T09:56:06.745042463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.754031 env[1381]: time="2024-02-09T09:56:06.753989440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.758393 env[1381]: time="2024-02-09T09:56:06.758364492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.761487 env[1381]: time="2024-02-09T09:56:06.761451580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:06.761838 env[1381]: time="2024-02-09T09:56:06.761811340Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:56:06.770469 env[1381]: time="2024-02-09T09:56:06.770434946Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:56:07.398783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634747435.mount: Deactivated successfully. Feb 9 09:56:07.423001 env[1381]: time="2024-02-09T09:56:07.422961716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.431792 env[1381]: time="2024-02-09T09:56:07.431754598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.438849 env[1381]: time="2024-02-09T09:56:07.438808238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.444098 env[1381]: time="2024-02-09T09:56:07.444071318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:07.444500 env[1381]: time="2024-02-09T09:56:07.444471462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:56:07.453229 env[1381]: time="2024-02-09T09:56:07.453194811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:56:08.588902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534418960.mount: Deactivated successfully. 
Feb 9 09:56:11.718097 env[1381]: time="2024-02-09T09:56:11.718045235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.731829 env[1381]: time="2024-02-09T09:56:11.731786545Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.739714 env[1381]: time="2024-02-09T09:56:11.739683724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.745899 env[1381]: time="2024-02-09T09:56:11.745871210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:11.746780 env[1381]: time="2024-02-09T09:56:11.746753535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:56:11.756733 env[1381]: time="2024-02-09T09:56:11.756708246Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:56:12.560259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553350055.mount: Deactivated successfully. Feb 9 09:56:13.002394 env[1381]: time="2024-02-09T09:56:13.002345411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.012204 env[1381]: time="2024-02-09T09:56:13.012165061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.018261 env[1381]: time="2024-02-09T09:56:13.018229979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.023116 env[1381]: time="2024-02-09T09:56:13.023076543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:13.023779 env[1381]: time="2024-02-09T09:56:13.023749262Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:56:15.363701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:56:15.363861 systemd[1]: Stopped kubelet.service. Feb 9 09:56:15.365243 systemd[1]: Started kubelet.service. Feb 9 09:56:15.419503 kubelet[1998]: E0209 09:56:15.419455 1998 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:56:15.421468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:56:15.421596 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
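Note: the PullImage/ImageCreate entries above are CRI traffic into containerd. The same pull can be driven by hand with crictl, which prepare-critools unpacked earlier; a sketch, with the socket path taken from the containerd entries at boot:

    import subprocess
    subprocess.run(
        ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
         "pull", "registry.k8s.io/coredns/coredns:v1.9.3"],
        check=True)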
Feb 9 09:56:17.514792 systemd[1]: Stopped kubelet.service. Feb 9 09:56:17.528293 systemd[1]: Reloading. Feb 9 09:56:17.591944 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2024-02-09T09:56:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:17.592635 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2024-02-09T09:56:17Z" level=info msg="torcx already run" Feb 9 09:56:17.656520 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:17.656538 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:17.671547 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:17.765062 systemd[1]: Started kubelet.service. Feb 9 09:56:17.819205 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:17.819565 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:17.819705 kubelet[2087]: I0209 09:56:17.819670 2087 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:17.820956 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:17.821039 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:18.631201 kubelet[2087]: I0209 09:56:18.631165 2087 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:56:18.631201 kubelet[2087]: I0209 09:56:18.631191 2087 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:18.631393 kubelet[2087]: I0209 09:56:18.631375 2087 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:56:18.634241 kubelet[2087]: E0209 09:56:18.634222 2087 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.634389 kubelet[2087]: I0209 09:56:18.634375 2087 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:18.635822 kubelet[2087]: W0209 09:56:18.635805 2087 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 9 09:56:18.636258 kubelet[2087]: I0209 09:56:18.636242 2087 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:56:18.636446 kubelet[2087]: I0209 09:56:18.636410 2087 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:18.636521 kubelet[2087]: I0209 09:56:18.636501 2087 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:18.636610 kubelet[2087]: I0209 09:56:18.636526 2087 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:18.636610 kubelet[2087]: I0209 09:56:18.636538 2087 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:56:18.636662 kubelet[2087]: I0209 09:56:18.636620 2087 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:18.639853 kubelet[2087]: I0209 09:56:18.639836 2087 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:56:18.639957 kubelet[2087]: I0209 09:56:18.639947 2087 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:18.640040 kubelet[2087]: I0209 09:56:18.640030 2087 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:56:18.640102 kubelet[2087]: I0209 09:56:18.640092 2087 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:18.641531 kubelet[2087]: I0209 09:56:18.641515 2087 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:18.641881 kubelet[2087]: W0209 09:56:18.641865 2087 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
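Note: the HardEvictionThresholds blob in the container-manager dump above is the kubelet's default hard-eviction table; restated signal by signal:

    # Quantity:100Mi / Percentage:0.1 / 0.05 / 0.15 from the dump above.
    thresholds = {
        "memory.available":  "100Mi",
        "nodefs.available":  "10%",
        "nodefs.inodesFree": "5%",
        "imagefs.available": "15%",
    }
    for signal, value in thresholds.items():
        print(f"evict when {signal} < {value}")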
Feb 9 09:56:18.642294 kubelet[2087]: I0209 09:56:18.642279 2087 server.go:1186] "Started kubelet" Feb 9 09:56:18.642515 kubelet[2087]: W0209 09:56:18.642473 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.643182 kubelet[2087]: E0209 09:56:18.643163 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.643270 kubelet[2087]: W0209 09:56:18.642849 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-8b452ef1bd&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.643340 kubelet[2087]: E0209 09:56:18.643324 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-8b452ef1bd&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.647766 kubelet[2087]: I0209 09:56:18.647747 2087 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:18.648529 kubelet[2087]: I0209 09:56:18.648511 2087 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:56:18.649887 kubelet[2087]: E0209 09:56:18.649793 2087 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-8b452ef1bd.17b2294381b12993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-8b452ef1bd", UID:"ci-3510.3.2-a-8b452ef1bd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-8b452ef1bd"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 56, 18, 642258323, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 56, 18, 642258323, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.40:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.40:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:56:18.650004 kubelet[2087]: E0209 09:56:18.649931 2087 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:18.650004 kubelet[2087]: E0209 09:56:18.649949 2087 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:18.657425 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:56:18.657682 kubelet[2087]: I0209 09:56:18.657659 2087 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:18.660950 kubelet[2087]: E0209 09:56:18.660917 2087 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-8b452ef1bd\" not found" Feb 9 09:56:18.660950 kubelet[2087]: I0209 09:56:18.660953 2087 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:56:18.661049 kubelet[2087]: I0209 09:56:18.661010 2087 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:18.661385 kubelet[2087]: W0209 09:56:18.661337 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.661385 kubelet[2087]: E0209 09:56:18.661384 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.661613 kubelet[2087]: E0209 09:56:18.661581 2087 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.755665 kubelet[2087]: I0209 09:56:18.755577 2087 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:18.755665 kubelet[2087]: I0209 09:56:18.755670 2087 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:18.755818 kubelet[2087]: I0209 09:56:18.755701 2087 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:18.760463 kubelet[2087]: I0209 09:56:18.760437 2087 policy_none.go:49] "None policy: Start" Feb 9 09:56:18.761059 kubelet[2087]: I0209 09:56:18.761035 2087 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:18.761059 kubelet[2087]: I0209 09:56:18.761061 2087 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:18.762206 kubelet[2087]: I0209 09:56:18.762189 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:18.762650 kubelet[2087]: E0209 09:56:18.762636 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:18.770898 systemd[1]: Created slice kubepods.slice. Feb 9 09:56:18.775234 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:56:18.785453 systemd[1]: Created slice kubepods-burstable.slice. 
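With CgroupDriver:systemd (per the NodeConfig above), the kubelet asks systemd to create the QoS-class slices just logged: kubepods.slice plus kubepods-besteffort.slice and kubepods-burstable.slice. Per-pod slices appear later in this log (e.g. kubepods-burstable-pod604b4494_3b22_41b2_b21f_a6a8e0a0b6c7.slice), with dashes in the pod UID escaped to underscores. A sketch of that naming scheme, inferred only from the slice names in this log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reconstructs the systemd slice names seen in this log
// (illustrative, not kubelet source): Burstable and BestEffort pods
// hang under a per-QoS child of kubepods.slice, and the per-pod slice
// embeds the pod UID with "-" escaped to "_".
func podSlice(qosClass, podUID string) string {
	prefix := "kubepods"
	if qosClass != "guaranteed" { // Guaranteed pods sit directly under kubepods.slice
		prefix += "-" + qosClass
	}
	return fmt.Sprintf("%s-pod%s.slice", prefix, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Both outputs match slices created later in this log.
	fmt.Println(podSlice("burstable", "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"))
	fmt.Println(podSlice("besteffort", "c8e132e9-4edd-4beb-8b41-7bbae5738261"))
}
```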
Feb 9 09:56:18.786973 kubelet[2087]: I0209 09:56:18.786947 2087 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:18.787172 kubelet[2087]: I0209 09:56:18.787151 2087 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:18.788704 kubelet[2087]: E0209 09:56:18.788685 2087 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-8b452ef1bd\" not found" Feb 9 09:56:18.836085 kubelet[2087]: I0209 09:56:18.836061 2087 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:18.862651 kubelet[2087]: E0209 09:56:18.862618 2087 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.870780 kubelet[2087]: I0209 09:56:18.870753 2087 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:56:18.870780 kubelet[2087]: I0209 09:56:18.870782 2087 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:56:18.870898 kubelet[2087]: I0209 09:56:18.870797 2087 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:56:18.870898 kubelet[2087]: E0209 09:56:18.870838 2087 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:56:18.872015 kubelet[2087]: W0209 09:56:18.871980 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.872101 kubelet[2087]: E0209 09:56:18.872020 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:18.964356 kubelet[2087]: I0209 09:56:18.964254 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:18.965754 kubelet[2087]: E0209 09:56:18.965733 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:18.971923 kubelet[2087]: I0209 09:56:18.971908 2087 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:18.973115 kubelet[2087]: I0209 09:56:18.973091 2087 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:18.974300 kubelet[2087]: I0209 09:56:18.974278 2087 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:18.975904 kubelet[2087]: I0209 09:56:18.975883 2087 status_manager.go:698] "Failed to get status for pod" podUID=a4ef167dc389e99a51ac4f111759117d pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" err="Get \"https://10.200.20.40:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-8b452ef1bd\": dial tcp 10.200.20.40:6443: connect: connection refused" Feb 9 09:56:18.977441 kubelet[2087]: I0209 09:56:18.977291 2087 status_manager.go:698] "Failed to get status for pod" podUID=a43b8da78456c6f3fb0a5ffc0840fe14 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" err="Get \"https://10.200.20.40:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\": dial tcp 10.200.20.40:6443: connect: connection refused" Feb 9 09:56:18.978328 kubelet[2087]: I0209 09:56:18.978291 2087 status_manager.go:698] "Failed to get status for pod" podUID=53b7be8472114445b2810e8198f8de10 pod="kube-system/kube-scheduler-ci-3510.3.2-a-8b452ef1bd" err="Get \"https://10.200.20.40:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-8b452ef1bd\": dial tcp 10.200.20.40:6443: connect: connection refused" Feb 9 09:56:18.980187 systemd[1]: Created slice kubepods-burstable-poda4ef167dc389e99a51ac4f111759117d.slice. Feb 9 09:56:18.993067 systemd[1]: Created slice kubepods-burstable-poda43b8da78456c6f3fb0a5ffc0840fe14.slice. Feb 9 09:56:18.996846 systemd[1]: Created slice kubepods-burstable-pod53b7be8472114445b2810e8198f8de10.slice. Feb 9 09:56:19.062462 kubelet[2087]: I0209 09:56:19.062411 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062462 kubelet[2087]: I0209 09:56:19.062469 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4ef167dc389e99a51ac4f111759117d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a4ef167dc389e99a51ac4f111759117d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062623 kubelet[2087]: I0209 09:56:19.062492 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4ef167dc389e99a51ac4f111759117d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a4ef167dc389e99a51ac4f111759117d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062623 kubelet[2087]: I0209 09:56:19.062511 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062623 kubelet[2087]: I0209 09:56:19.062543 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062623 kubelet[2087]: I0209 09:56:19.062563 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 
09:56:19.062623 kubelet[2087]: I0209 09:56:19.062582 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4ef167dc389e99a51ac4f111759117d-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a4ef167dc389e99a51ac4f111759117d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062739 kubelet[2087]: I0209 09:56:19.062603 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.062739 kubelet[2087]: I0209 09:56:19.062623 2087 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53b7be8472114445b2810e8198f8de10-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-8b452ef1bd\" (UID: \"53b7be8472114445b2810e8198f8de10\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.264222 kubelet[2087]: E0209 09:56:19.264119 2087 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:19.293315 env[1381]: time="2024-02-09T09:56:19.293018336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-8b452ef1bd,Uid:a4ef167dc389e99a51ac4f111759117d,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:19.295606 env[1381]: time="2024-02-09T09:56:19.295452509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-8b452ef1bd,Uid:a43b8da78456c6f3fb0a5ffc0840fe14,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:19.298997 env[1381]: time="2024-02-09T09:56:19.298852307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-8b452ef1bd,Uid:53b7be8472114445b2810e8198f8de10,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:19.368643 kubelet[2087]: I0209 09:56:19.368613 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.368919 kubelet[2087]: E0209 09:56:19.368899 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:19.685216 kubelet[2087]: W0209 09:56:19.685160 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:19.685216 kubelet[2087]: E0209 09:56:19.685217 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:19.868269 kubelet[2087]: W0209 09:56:19.868203 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-8b452ef1bd&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:19.868269 kubelet[2087]: E0209 09:56:19.868271 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-8b452ef1bd&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:19.951599 kubelet[2087]: W0209 09:56:19.951484 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:19.951599 kubelet[2087]: E0209 09:56:19.951519 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:20.024611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721902044.mount: Deactivated successfully. Feb 9 09:56:20.061009 env[1381]: time="2024-02-09T09:56:20.060940024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.065441 kubelet[2087]: E0209 09:56:20.065386 2087 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:20.067562 env[1381]: time="2024-02-09T09:56:20.067534857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.076372 env[1381]: time="2024-02-09T09:56:20.076341589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.081039 env[1381]: time="2024-02-09T09:56:20.081004718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.092115 env[1381]: time="2024-02-09T09:56:20.092086462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.095177 env[1381]: time="2024-02-09T09:56:20.095139744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.100499 env[1381]: time="2024-02-09T09:56:20.100471321Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.104023 env[1381]: time="2024-02-09T09:56:20.103984416Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.112945 env[1381]: time="2024-02-09T09:56:20.112903730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.121871 env[1381]: time="2024-02-09T09:56:20.121831208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.130150 env[1381]: time="2024-02-09T09:56:20.130106006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.148273 env[1381]: time="2024-02-09T09:56:20.148230591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:20.166381 env[1381]: time="2024-02-09T09:56:20.166139296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:20.166546 env[1381]: time="2024-02-09T09:56:20.166351613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:20.166546 env[1381]: time="2024-02-09T09:56:20.166362099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:20.166829 env[1381]: time="2024-02-09T09:56:20.166785532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea34ea5e28d83d52dbe76772f6253b21ad03af2e5990e70f02b677197b6dced6 pid=2162 runtime=io.containerd.runc.v2 Feb 9 09:56:20.171706 kubelet[2087]: I0209 09:56:20.171467 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:20.171795 kubelet[2087]: E0209 09:56:20.171779 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:20.184553 systemd[1]: Started cri-containerd-ea34ea5e28d83d52dbe76772f6253b21ad03af2e5990e70f02b677197b6dced6.scope. Feb 9 09:56:20.212258 env[1381]: time="2024-02-09T09:56:20.212123988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:20.212258 env[1381]: time="2024-02-09T09:56:20.212222562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:20.212390 env[1381]: time="2024-02-09T09:56:20.212256301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:20.212961 env[1381]: time="2024-02-09T09:56:20.212486427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914 pid=2199 runtime=io.containerd.runc.v2 Feb 9 09:56:20.227869 systemd[1]: Started cri-containerd-93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914.scope. Feb 9 09:56:20.236263 env[1381]: time="2024-02-09T09:56:20.236118366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:20.236383 env[1381]: time="2024-02-09T09:56:20.236272571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:20.236383 env[1381]: time="2024-02-09T09:56:20.236315074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:20.236535 env[1381]: time="2024-02-09T09:56:20.236497655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5 pid=2230 runtime=io.containerd.runc.v2 Feb 9 09:56:20.237549 env[1381]: time="2024-02-09T09:56:20.237502608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-8b452ef1bd,Uid:a4ef167dc389e99a51ac4f111759117d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea34ea5e28d83d52dbe76772f6253b21ad03af2e5990e70f02b677197b6dced6\"" Feb 9 09:56:20.244319 env[1381]: time="2024-02-09T09:56:20.244279782Z" level=info msg="CreateContainer within sandbox \"ea34ea5e28d83d52dbe76772f6253b21ad03af2e5990e70f02b677197b6dced6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:56:20.261531 systemd[1]: Started cri-containerd-7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5.scope. 
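Each RunPodSandbox above spawns a runc shim-v2 process ("starting signal loop") whose task state lives under /run/containerd/io.containerd.runtime.v2.task/<namespace>/<sandbox-id> (namespace k8s.io here), while systemd tracks the sandbox cgroup as a transient cri-containerd-<sandbox-id>.scope unit. A small sketch assembling both identifiers from a sandbox ID, grounded only in the path= and unit names printed in this log:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// shimPaths reconstructs the two identifiers this log prints for each
// sandbox: the shim-v2 task directory and the transient systemd scope.
func shimPaths(namespace, sandboxID string) (taskDir, scopeUnit string) {
	taskDir = filepath.Join("/run/containerd/io.containerd.runtime.v2.task", namespace, sandboxID)
	scopeUnit = "cri-containerd-" + sandboxID + ".scope"
	return
}

func main() {
	dir, scope := shimPaths("k8s.io", "ea34ea5e28d83d52dbe76772f6253b21ad03af2e5990e70f02b677197b6dced6")
	fmt.Println(dir)   // matches the path= field in the "starting signal loop" entry
	fmt.Println(scope) // matches the unit systemd reports as started
}
```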
Feb 9 09:56:20.267341 kubelet[2087]: W0209 09:56:20.267256 2087 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:20.267341 kubelet[2087]: E0209 09:56:20.267311 2087 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 09:56:20.275813 env[1381]: time="2024-02-09T09:56:20.275751639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-8b452ef1bd,Uid:53b7be8472114445b2810e8198f8de10,Namespace:kube-system,Attempt:0,} returns sandbox id \"93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914\"" Feb 9 09:56:20.280735 env[1381]: time="2024-02-09T09:56:20.280704007Z" level=info msg="CreateContainer within sandbox \"93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:56:20.303322 env[1381]: time="2024-02-09T09:56:20.303284566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-8b452ef1bd,Uid:a43b8da78456c6f3fb0a5ffc0840fe14,Namespace:kube-system,Attempt:0,} returns sandbox id \"7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5\"" Feb 9 09:56:20.308451 env[1381]: time="2024-02-09T09:56:20.307977111Z" level=info msg="CreateContainer within sandbox \"ea34ea5e28d83d52dbe76772f6253b21ad03af2e5990e70f02b677197b6dced6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"553fe3d3e42d6bb05cbbfceb9caab0056368b72c15ad9ceb10eb61cf201441f8\"" Feb 9 09:56:20.309727 env[1381]: time="2024-02-09T09:56:20.309694737Z" level=info msg="StartContainer for \"553fe3d3e42d6bb05cbbfceb9caab0056368b72c15ad9ceb10eb61cf201441f8\"" Feb 9 09:56:20.312776 env[1381]: time="2024-02-09T09:56:20.312740215Z" level=info msg="CreateContainer within sandbox \"7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:56:20.328975 systemd[1]: Started cri-containerd-553fe3d3e42d6bb05cbbfceb9caab0056368b72c15ad9ceb10eb61cf201441f8.scope. 
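The recurring reflector warnings are ordinary list calls against an API server that is not up yet; the kubelet itself is in the middle of starting it as a static pod, so every list to 10.200.20.40:6443 gets connection refused until the apiserver container comes up. A minimal client-go sketch issuing the same request the log shows failing (assumes k8s.io/client-go is available; the real kubelet authenticates with its rotated client certificate rather than this anonymous, insecure config):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Anonymous config pointed at the not-yet-running apiserver.
	cfg := &rest.Config{
		Host:            "https://10.200.20.40:6443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Mirrors GET /api/v1/services?limit=500&resourceVersion=0 from the log.
	_, err = cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
	fmt.Println(err) // dial tcp 10.200.20.40:6443: connect: connection refused
}
```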
Feb 9 09:56:20.363581 env[1381]: time="2024-02-09T09:56:20.363541720Z" level=info msg="CreateContainer within sandbox \"93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332\"" Feb 9 09:56:20.364319 env[1381]: time="2024-02-09T09:56:20.364245908Z" level=info msg="StartContainer for \"8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332\"" Feb 9 09:56:20.368586 env[1381]: time="2024-02-09T09:56:20.368552640Z" level=info msg="StartContainer for \"553fe3d3e42d6bb05cbbfceb9caab0056368b72c15ad9ceb10eb61cf201441f8\" returns successfully" Feb 9 09:56:20.374715 env[1381]: time="2024-02-09T09:56:20.374683898Z" level=info msg="CreateContainer within sandbox \"7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd\"" Feb 9 09:56:20.375259 env[1381]: time="2024-02-09T09:56:20.375228038Z" level=info msg="StartContainer for \"a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd\"" Feb 9 09:56:20.384350 systemd[1]: Started cri-containerd-8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332.scope. Feb 9 09:56:20.401659 systemd[1]: Started cri-containerd-a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd.scope. Feb 9 09:56:20.459907 env[1381]: time="2024-02-09T09:56:20.459864221Z" level=info msg="StartContainer for \"a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd\" returns successfully" Feb 9 09:56:20.466736 env[1381]: time="2024-02-09T09:56:20.466635031Z" level=info msg="StartContainer for \"8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332\" returns successfully" Feb 9 09:56:21.773518 kubelet[2087]: I0209 09:56:21.773494 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:22.611021 kubelet[2087]: E0209 09:56:22.610994 2087 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-8b452ef1bd\" not found" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:22.641894 kubelet[2087]: I0209 09:56:22.641861 2087 apiserver.go:52] "Watching apiserver" Feb 9 09:56:22.662175 kubelet[2087]: I0209 09:56:22.662145 2087 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:22.683932 kubelet[2087]: I0209 09:56:22.683904 2087 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:22.687632 kubelet[2087]: I0209 09:56:22.687605 2087 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:23.244912 kubelet[2087]: E0209 09:56:23.244882 2087 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:24.230376 kubelet[2087]: E0209 09:56:24.230264 2087 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-8b452ef1bd.17b2294381b12993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-8b452ef1bd", UID:"ci-3510.3.2-a-8b452ef1bd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-8b452ef1bd"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 56, 18, 642258323, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 56, 18, 642258323, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:56:24.297109 kubelet[2087]: E0209 09:56:24.296997 2087 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-8b452ef1bd.17b2294382266522", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-8b452ef1bd", UID:"ci-3510.3.2-a-8b452ef1bd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-8b452ef1bd"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 56, 18, 649941282, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 56, 18, 649941282, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 09:56:25.892797 systemd[1]: Reloading. Feb 9 09:56:25.989522 /usr/lib/systemd/system-generators/torcx-generator[2415]: time="2024-02-09T09:56:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:56:25.989550 /usr/lib/systemd/system-generators/torcx-generator[2415]: time="2024-02-09T09:56:25Z" level=info msg="torcx already run" Feb 9 09:56:26.087725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:26.087880 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:26.107064 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
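Both rejected events fail with 'namespaces "default" not found' because node-scoped kubelet events are recorded in the default namespace, which the freshly bootstrapped apiserver has not created yet by the time the queued events are flushed. Note the event Name in the payload, ci-3510.3.2-a-8b452ef1bd.17b2294381b12993: the suffix is the event's FirstTimestamp in nanoseconds, hex-encoded, appended to the involved object's name (the client-go event recorder's naming convention). A quick check that reproduces the logged name exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// FirstTimestamp from the rejected event payload above.
	first := time.Date(2024, time.February, 9, 9, 56, 18, 642258323, time.UTC)
	// Events are named "<involved-object-name>.<UnixNano in hex>".
	fmt.Printf("ci-3510.3.2-a-8b452ef1bd.%x\n", first.UnixNano())
	// Output: ci-3510.3.2-a-8b452ef1bd.17b2294381b12993
}
```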
Feb 9 09:56:26.230346 kubelet[2087]: I0209 09:56:26.230207 2087 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:26.233518 systemd[1]: Stopping kubelet.service... Feb 9 09:56:26.244852 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:56:26.245260 systemd[1]: Stopped kubelet.service. Feb 9 09:56:26.245385 systemd[1]: kubelet.service: Consumed 1.190s CPU time. Feb 9 09:56:26.247449 systemd[1]: Started kubelet.service. Feb 9 09:56:26.315873 kubelet[2474]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:26.315873 kubelet[2474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:26.318707 kubelet[2474]: I0209 09:56:26.315912 2474 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:56:26.318707 kubelet[2474]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:56:26.318707 kubelet[2474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:56:26.322116 kubelet[2474]: I0209 09:56:26.322096 2474 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:56:26.322238 kubelet[2474]: I0209 09:56:26.322227 2474 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:56:26.322494 kubelet[2474]: I0209 09:56:26.322471 2474 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:56:26.325625 kubelet[2474]: I0209 09:56:26.325608 2474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:56:26.329010 kubelet[2474]: W0209 09:56:26.328995 2474 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:56:26.329614 kubelet[2474]: I0209 09:56:26.329597 2474 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:56:26.329849 kubelet[2474]: I0209 09:56:26.329838 2474 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:56:26.329973 kubelet[2474]: I0209 09:56:26.329962 2474 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:56:26.330082 kubelet[2474]: I0209 09:56:26.330071 2474 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:56:26.330142 kubelet[2474]: I0209 09:56:26.330133 2474 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:56:26.330264 kubelet[2474]: I0209 09:56:26.330254 2474 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:26.330787 kubelet[2474]: I0209 09:56:26.330764 2474 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:56:26.333974 kubelet[2474]: I0209 09:56:26.333958 2474 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:56:26.334928 kubelet[2474]: I0209 09:56:26.334914 2474 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:56:26.335052 kubelet[2474]: I0209 09:56:26.335043 2474 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:56:26.338978 kubelet[2474]: I0209 09:56:26.338962 2474 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:56:26.339761 kubelet[2474]: I0209 09:56:26.339746 2474 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:56:26.340219 kubelet[2474]: I0209 09:56:26.340203 2474 server.go:1186] "Started kubelet" Feb 9 09:56:26.344608 kubelet[2474]: I0209 09:56:26.344591 2474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:56:26.360472 kubelet[2474]: I0209 09:56:26.359902 2474 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:56:26.362012 kubelet[2474]: I0209 09:56:26.361694 2474 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:56:26.366377 kubelet[2474]: E0209 09:56:26.366320 2474 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory 
cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:56:26.366377 kubelet[2474]: E0209 09:56:26.366352 2474 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:56:26.366377 kubelet[2474]: I0209 09:56:26.366383 2474 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:56:26.370678 kubelet[2474]: I0209 09:56:26.370649 2474 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:56:26.392508 sudo[2492]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:56:26.392996 sudo[2492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:56:26.404832 kubelet[2474]: I0209 09:56:26.404791 2474 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:56:26.446034 kubelet[2474]: I0209 09:56:26.446004 2474 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:56:26.446034 kubelet[2474]: I0209 09:56:26.446027 2474 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:56:26.446183 kubelet[2474]: I0209 09:56:26.446047 2474 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:56:26.446183 kubelet[2474]: E0209 09:56:26.446093 2474 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:56:26.458125 kubelet[2474]: I0209 09:56:26.458087 2474 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:56:26.458125 kubelet[2474]: I0209 09:56:26.458112 2474 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:56:26.458125 kubelet[2474]: I0209 09:56:26.458128 2474 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:56:26.458284 kubelet[2474]: I0209 09:56:26.458263 2474 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:56:26.458284 kubelet[2474]: I0209 09:56:26.458275 2474 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:56:26.458284 kubelet[2474]: I0209 09:56:26.458281 2474 policy_none.go:49] "None policy: Start" Feb 9 09:56:26.459165 kubelet[2474]: I0209 09:56:26.459140 2474 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:56:26.459165 kubelet[2474]: I0209 09:56:26.459167 2474 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:56:26.459297 kubelet[2474]: I0209 09:56:26.459278 2474 state_mem.go:75] "Updated machine memory state" Feb 9 09:56:26.462008 kubelet[2474]: I0209 09:56:26.461977 2474 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.470852 kubelet[2474]: I0209 09:56:26.470822 2474 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:56:26.471057 kubelet[2474]: I0209 09:56:26.471035 2474 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:56:26.478978 kubelet[2474]: I0209 09:56:26.478942 2474 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.479054 kubelet[2474]: I0209 09:56:26.479016 2474 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.546871 kubelet[2474]: I0209 09:56:26.546766 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:26.546871 kubelet[2474]: I0209 09:56:26.546872 2474 
topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:26.547014 kubelet[2474]: I0209 09:56:26.546918 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:26.565598 kubelet[2474]: I0209 09:56:26.565554 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4ef167dc389e99a51ac4f111759117d-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a4ef167dc389e99a51ac4f111759117d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.565929 kubelet[2474]: I0209 09:56:26.565917 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4ef167dc389e99a51ac4f111759117d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a4ef167dc389e99a51ac4f111759117d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566051 kubelet[2474]: I0209 09:56:26.566041 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566158 kubelet[2474]: I0209 09:56:26.566148 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566274 kubelet[2474]: I0209 09:56:26.566265 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53b7be8472114445b2810e8198f8de10-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-8b452ef1bd\" (UID: \"53b7be8472114445b2810e8198f8de10\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566378 kubelet[2474]: I0209 09:56:26.566369 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4ef167dc389e99a51ac4f111759117d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a4ef167dc389e99a51ac4f111759117d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566503 kubelet[2474]: I0209 09:56:26.566492 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566603 kubelet[2474]: I0209 09:56:26.566594 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.566713 kubelet[2474]: I0209 09:56:26.566703 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a43b8da78456c6f3fb0a5ffc0840fe14-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-8b452ef1bd\" (UID: \"a43b8da78456c6f3fb0a5ffc0840fe14\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.578788 kubelet[2474]: E0209 09:56:26.578768 2474 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:26.951856 sudo[2492]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:27.339914 kubelet[2474]: I0209 09:56:27.339818 2474 apiserver.go:52] "Watching apiserver" Feb 9 09:56:27.362065 kubelet[2474]: I0209 09:56:27.362033 2474 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:56:27.372434 kubelet[2474]: I0209 09:56:27.372399 2474 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:56:27.555019 kubelet[2474]: E0209 09:56:27.554991 2474 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-8b452ef1bd\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:27.946388 kubelet[2474]: E0209 09:56:27.946350 2474 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-8b452ef1bd\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-8b452ef1bd" Feb 9 09:56:28.294226 sudo[1687]: pam_unix(sudo:session): session closed for user root Feb 9 09:56:28.360980 sshd[1684]: pam_unix(sshd:session): session closed for user core Feb 9 09:56:28.363891 systemd[1]: sshd@4-10.200.20.40:22-10.200.12.6:57474.service: Deactivated successfully. Feb 9 09:56:28.364079 systemd-logind[1369]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:56:28.364672 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:56:28.364844 systemd[1]: session-7.scope: Consumed 5.693s CPU time. Feb 9 09:56:28.366091 systemd-logind[1369]: Removed session 7. 
Feb 9 09:56:28.541478 kubelet[2474]: I0209 09:56:28.541435 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-8b452ef1bd" podStartSLOduration=5.54136165 pod.CreationTimestamp="2024-02-09 09:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:28.145721304 +0000 UTC m=+1.894750508" watchObservedRunningTime="2024-02-09 09:56:28.54136165 +0000 UTC m=+2.290390854" Feb 9 09:56:28.940219 kubelet[2474]: I0209 09:56:28.940166 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-8b452ef1bd" podStartSLOduration=2.940126589 pod.CreationTimestamp="2024-02-09 09:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:28.5412971 +0000 UTC m=+2.290326304" watchObservedRunningTime="2024-02-09 09:56:28.940126589 +0000 UTC m=+2.689155793" Feb 9 09:56:28.940396 kubelet[2474]: I0209 09:56:28.940325 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-8b452ef1bd" podStartSLOduration=2.940307352 pod.CreationTimestamp="2024-02-09 09:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:28.93960611 +0000 UTC m=+2.688635314" watchObservedRunningTime="2024-02-09 09:56:28.940307352 +0000 UTC m=+2.689336556" Feb 9 09:56:38.189166 kubelet[2474]: I0209 09:56:38.189123 2474 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:56:38.189866 env[1381]: time="2024-02-09T09:56:38.189782443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:56:38.190317 kubelet[2474]: I0209 09:56:38.190226 2474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:56:38.661382 kubelet[2474]: I0209 09:56:38.661354 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:38.665930 systemd[1]: Created slice kubepods-besteffort-podc8e132e9_4edd_4beb_8b41_7bbae5738261.slice. Feb 9 09:56:38.694348 kubelet[2474]: I0209 09:56:38.692154 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:38.703802 systemd[1]: Created slice kubepods-burstable-pod604b4494_3b22_41b2_b21f_a6a8e0a0b6c7.slice. 
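Two details of the podStartSLOduration entries above are worth decoding. First, the arithmetic checks out from the log itself: 5.54136165 is exactly watchObservedRunningTime (09:56:28.54136165) minus the pod's CreationTimestamp (09:56:23). Second, the "m=+2.290390854" suffixes are Go's monotonic clock reading, which is printed automatically whenever a time.Time obtained from time.Now() is formatted; the wall clock can jump, the m=+ offset cannot, so durations computed from these values stay accurate. A standard-library demonstration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries a monotonic clock reading
	time.Sleep(10 * time.Millisecond)
	now := time.Now()
	// Printing a time.Time from time.Now() appends the monotonic
	// offset, e.g. "... +0000 UTC m=+0.010123456" -- the same
	// "m=+2.290390854" suffix seen in the kubelet entries above.
	fmt.Println(now)
	fmt.Println(now.Sub(start)) // subtraction uses the monotonic reading
}
```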
Feb 9 09:56:38.727231 kubelet[2474]: I0209 09:56:38.727195 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hubble-tls\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727375 kubelet[2474]: I0209 09:56:38.727242 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8e132e9-4edd-4beb-8b41-7bbae5738261-kube-proxy\") pod \"kube-proxy-dgcjs\" (UID: \"c8e132e9-4edd-4beb-8b41-7bbae5738261\") " pod="kube-system/kube-proxy-dgcjs" Feb 9 09:56:38.727375 kubelet[2474]: I0209 09:56:38.727276 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hostproc\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727375 kubelet[2474]: I0209 09:56:38.727303 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-run\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727375 kubelet[2474]: I0209 09:56:38.727326 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm7k2\" (UniqueName: \"kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-kube-api-access-vm7k2\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727375 kubelet[2474]: I0209 09:56:38.727362 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prp5k\" (UniqueName: \"kubernetes.io/projected/c8e132e9-4edd-4beb-8b41-7bbae5738261-kube-api-access-prp5k\") pod \"kube-proxy-dgcjs\" (UID: \"c8e132e9-4edd-4beb-8b41-7bbae5738261\") " pod="kube-system/kube-proxy-dgcjs" Feb 9 09:56:38.727518 kubelet[2474]: I0209 09:56:38.727384 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-cgroup\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727518 kubelet[2474]: I0209 09:56:38.727406 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-lib-modules\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727518 kubelet[2474]: I0209 09:56:38.727446 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-xtables-lock\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727518 kubelet[2474]: I0209 09:56:38.727469 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-clustermesh-secrets\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727518 kubelet[2474]: I0209 09:56:38.727489 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8e132e9-4edd-4beb-8b41-7bbae5738261-lib-modules\") pod \"kube-proxy-dgcjs\" (UID: \"c8e132e9-4edd-4beb-8b41-7bbae5738261\") " pod="kube-system/kube-proxy-dgcjs" Feb 9 09:56:38.727518 kubelet[2474]: I0209 09:56:38.727519 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-bpf-maps\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727646 kubelet[2474]: I0209 09:56:38.727541 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cni-path\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727646 kubelet[2474]: I0209 09:56:38.727565 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-config-path\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727646 kubelet[2474]: I0209 09:56:38.727602 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-net\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727646 kubelet[2474]: I0209 09:56:38.727622 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8e132e9-4edd-4beb-8b41-7bbae5738261-xtables-lock\") pod \"kube-proxy-dgcjs\" (UID: \"c8e132e9-4edd-4beb-8b41-7bbae5738261\") " pod="kube-system/kube-proxy-dgcjs" Feb 9 09:56:38.727732 kubelet[2474]: I0209 09:56:38.727647 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-etc-cni-netd\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.727732 kubelet[2474]: I0209 09:56:38.727682 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-kernel\") pod \"cilium-tblds\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") " pod="kube-system/cilium-tblds" Feb 9 09:56:38.845459 kubelet[2474]: E0209 09:56:38.845434 2474 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 09:56:38.845622 kubelet[2474]: E0209 09:56:38.845611 2474 projected.go:198] Error preparing data for projected volume kube-api-access-prp5k for pod kube-system/kube-proxy-dgcjs: 
configmap "kube-root-ca.crt" not found Feb 9 09:56:38.845760 kubelet[2474]: E0209 09:56:38.845740 2474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8e132e9-4edd-4beb-8b41-7bbae5738261-kube-api-access-prp5k podName:c8e132e9-4edd-4beb-8b41-7bbae5738261 nodeName:}" failed. No retries permitted until 2024-02-09 09:56:39.34571934 +0000 UTC m=+13.094748544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-prp5k" (UniqueName: "kubernetes.io/projected/c8e132e9-4edd-4beb-8b41-7bbae5738261-kube-api-access-prp5k") pod "kube-proxy-dgcjs" (UID: "c8e132e9-4edd-4beb-8b41-7bbae5738261") : configmap "kube-root-ca.crt" not found Feb 9 09:56:39.007400 env[1381]: time="2024-02-09T09:56:39.007260429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tblds,Uid:604b4494-3b22-41b2-b21f-a6a8e0a0b6c7,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:39.053984 env[1381]: time="2024-02-09T09:56:39.053902955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:39.053984 env[1381]: time="2024-02-09T09:56:39.053945810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:39.054210 env[1381]: time="2024-02-09T09:56:39.053979703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:39.054265 env[1381]: time="2024-02-09T09:56:39.054243879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b pid=2576 runtime=io.containerd.runc.v2 Feb 9 09:56:39.094899 systemd[1]: Started cri-containerd-814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b.scope. Feb 9 09:56:39.197078 env[1381]: time="2024-02-09T09:56:39.197023660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tblds,Uid:604b4494-3b22-41b2-b21f-a6a8e0a0b6c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\"" Feb 9 09:56:39.199545 env[1381]: time="2024-02-09T09:56:39.199516370Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:56:39.215986 kubelet[2474]: I0209 09:56:39.215512 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:39.220363 systemd[1]: Created slice kubepods-besteffort-podf6bc1ad8_65b4_4e39_8112_d5f6d752bffa.slice. 
Feb 9 09:56:39.333760 kubelet[2474]: I0209 09:56:39.333644 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnlbd\" (UniqueName: \"kubernetes.io/projected/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-kube-api-access-vnlbd\") pod \"cilium-operator-f59cbd8c6-w5szb\" (UID: \"f6bc1ad8-65b4-4e39-8112-d5f6d752bffa\") " pod="kube-system/cilium-operator-f59cbd8c6-w5szb" Feb 9 09:56:39.333760 kubelet[2474]: I0209 09:56:39.333724 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-w5szb\" (UID: \"f6bc1ad8-65b4-4e39-8112-d5f6d752bffa\") " pod="kube-system/cilium-operator-f59cbd8c6-w5szb" Feb 9 09:56:39.524203 env[1381]: time="2024-02-09T09:56:39.524165096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-w5szb,Uid:f6bc1ad8-65b4-4e39-8112-d5f6d752bffa,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:39.562049 env[1381]: time="2024-02-09T09:56:39.561840505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:39.562049 env[1381]: time="2024-02-09T09:56:39.561878319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:39.562049 env[1381]: time="2024-02-09T09:56:39.561888643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:39.562267 env[1381]: time="2024-02-09T09:56:39.562084274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff pid=2617 runtime=io.containerd.runc.v2 Feb 9 09:56:39.572426 systemd[1]: Started cri-containerd-771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff.scope. Feb 9 09:56:39.577794 env[1381]: time="2024-02-09T09:56:39.576761998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgcjs,Uid:c8e132e9-4edd-4beb-8b41-7bbae5738261,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:39.609170 env[1381]: time="2024-02-09T09:56:39.609053199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-w5szb,Uid:f6bc1ad8-65b4-4e39-8112-d5f6d752bffa,Namespace:kube-system,Attempt:0,} returns sandbox id \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\"" Feb 9 09:56:39.626177 env[1381]: time="2024-02-09T09:56:39.625978945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:39.626177 env[1381]: time="2024-02-09T09:56:39.626017319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:39.626177 env[1381]: time="2024-02-09T09:56:39.626027683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:39.626497 env[1381]: time="2024-02-09T09:56:39.626409943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8df0ca772b2dfb062c700c5c8a4dab8cfebe78536a76000bb42bdc15328abe64 pid=2660 runtime=io.containerd.runc.v2 Feb 9 09:56:39.639050 systemd[1]: Started cri-containerd-8df0ca772b2dfb062c700c5c8a4dab8cfebe78536a76000bb42bdc15328abe64.scope. Feb 9 09:56:39.663967 env[1381]: time="2024-02-09T09:56:39.663925573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgcjs,Uid:c8e132e9-4edd-4beb-8b41-7bbae5738261,Namespace:kube-system,Attempt:0,} returns sandbox id \"8df0ca772b2dfb062c700c5c8a4dab8cfebe78536a76000bb42bdc15328abe64\"" Feb 9 09:56:39.669601 env[1381]: time="2024-02-09T09:56:39.669542346Z" level=info msg="CreateContainer within sandbox \"8df0ca772b2dfb062c700c5c8a4dab8cfebe78536a76000bb42bdc15328abe64\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:56:39.714222 env[1381]: time="2024-02-09T09:56:39.714174937Z" level=info msg="CreateContainer within sandbox \"8df0ca772b2dfb062c700c5c8a4dab8cfebe78536a76000bb42bdc15328abe64\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00b30e8b2a8831290c280d69b78d0273c46cd6b6f7be02f3c481d62f6f41d782\"" Feb 9 09:56:39.715218 env[1381]: time="2024-02-09T09:56:39.715188387Z" level=info msg="StartContainer for \"00b30e8b2a8831290c280d69b78d0273c46cd6b6f7be02f3c481d62f6f41d782\"" Feb 9 09:56:39.731822 systemd[1]: Started cri-containerd-00b30e8b2a8831290c280d69b78d0273c46cd6b6f7be02f3c481d62f6f41d782.scope. Feb 9 09:56:39.777085 env[1381]: time="2024-02-09T09:56:39.777037591Z" level=info msg="StartContainer for \"00b30e8b2a8831290c280d69b78d0273c46cd6b6f7be02f3c481d62f6f41d782\" returns successfully" Feb 9 09:56:41.472231 kubelet[2474]: I0209 09:56:41.469885 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dgcjs" podStartSLOduration=3.469844795 pod.CreationTimestamp="2024-02-09 09:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:41.469621076 +0000 UTC m=+15.218650280" watchObservedRunningTime="2024-02-09 09:56:41.469844795 +0000 UTC m=+15.218873959" Feb 9 09:56:43.741539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016717049.mount: Deactivated successfully. 
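kube-proxy-dgcjs only reached RunPodSandbox above after the earlier kube-api-access-prp5k mount failure resolved; the kubelet's nestedpendingoperations schedules such retries with a delay that grows on repeated failure, visible in the log as "durationBeforeRetry 500ms". A sketch of that retry shape, assuming a 500ms initial delay doubling up to a cap (the constants and names here are illustrative, not kubelet's code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure and
// capping it, mirroring the "no retries permitted until ..." pattern above.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration) {
	delay := initial
	for {
		err := op()
		if err == nil {
			return
		}
		fmt.Printf("failed, no retries permitted for %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	attempts := 0
	op := func() error {
		attempts++
		if attempts < 3 { // succeeds once kube-root-ca.crt exists
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	}
	retryWithBackoff(op, 500*time.Millisecond, 2*time.Minute)
	fmt.Println("mounted after", attempts, "attempts")
}
```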
Feb 9 09:56:45.930744 env[1381]: time="2024-02-09T09:56:45.930692555Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:45.939179 env[1381]: time="2024-02-09T09:56:45.939131431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:45.952896 env[1381]: time="2024-02-09T09:56:45.952859915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:45.953406 env[1381]: time="2024-02-09T09:56:45.953376524Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:56:45.955660 env[1381]: time="2024-02-09T09:56:45.955633781Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:56:45.957548 env[1381]: time="2024-02-09T09:56:45.957519077Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:56:45.997946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284789447.mount: Deactivated successfully. Feb 9 09:56:46.013121 env[1381]: time="2024-02-09T09:56:46.013075515Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\"" Feb 9 09:56:46.014704 env[1381]: time="2024-02-09T09:56:46.014658502Z" level=info msg="StartContainer for \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\"" Feb 9 09:56:46.033564 systemd[1]: Started cri-containerd-bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd.scope. Feb 9 09:56:46.066826 env[1381]: time="2024-02-09T09:56:46.066230688Z" level=info msg="StartContainer for \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\" returns successfully" Feb 9 09:56:46.071786 systemd[1]: cri-containerd-bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd.scope: Deactivated successfully. Feb 9 09:56:46.992856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd-rootfs.mount: Deactivated successfully. 
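The ImageCreate/ImageUpdate events above show the tag-plus-digest reference being resolved to a content-addressed image ID (sha256:b69cb5...). Roughly the same pull can be driven with the containerd Go client against the same k8s.io namespace; a minimal sketch, assuming a reachable containerd socket and eliding production error handling:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Same namespace the CRI plugin uses in the log above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "target:", img.Target().Digest)
}
```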
Feb 9 09:56:47.710333 env[1381]: time="2024-02-09T09:56:47.710141023Z" level=info msg="shim disconnected" id=bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd Feb 9 09:56:47.710333 env[1381]: time="2024-02-09T09:56:47.710183437Z" level=warning msg="cleaning up after shim disconnected" id=bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd namespace=k8s.io Feb 9 09:56:47.710333 env[1381]: time="2024-02-09T09:56:47.710193600Z" level=info msg="cleaning up dead shim" Feb 9 09:56:47.716861 env[1381]: time="2024-02-09T09:56:47.716822769Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n" Feb 9 09:56:48.516906 env[1381]: time="2024-02-09T09:56:48.516864581Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:56:48.551849 env[1381]: time="2024-02-09T09:56:48.551796843Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\"" Feb 9 09:56:48.554123 env[1381]: time="2024-02-09T09:56:48.554041138Z" level=info msg="StartContainer for \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\"" Feb 9 09:56:48.571662 systemd[1]: Started cri-containerd-23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c.scope. Feb 9 09:56:48.611962 env[1381]: time="2024-02-09T09:56:48.611918868Z" level=info msg="StartContainer for \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\" returns successfully" Feb 9 09:56:48.615659 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:56:48.616171 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:56:48.616358 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:56:48.619245 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:48.626749 systemd[1]: cri-containerd-23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c.scope: Deactivated successfully. Feb 9 09:56:48.628473 systemd[1]: Finished systemd-sysctl.service. 
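The systemd-sysctl stop/start cycle above is systemd reacting to the apply-sysctl-overwrites init container rewriting kernel parameters under /proc/sys (Cilium adjusts settings such as rp_filter so its datapath functions). The mechanism is just a file write; a minimal sketch, with an illustrative parameter rather than Cilium's actual list, and requiring root:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// A sysctl "a.b.c" maps to /proc/sys/a/b/c; writing the file
	// changes the parameter at runtime.
	const param = "/proc/sys/net/ipv4/conf/all/rp_filter"
	if err := os.WriteFile(param, []byte("0\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
```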
Feb 9 09:56:48.671706 env[1381]: time="2024-02-09T09:56:48.671660575Z" level=info msg="shim disconnected" id=23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c Feb 9 09:56:48.671925 env[1381]: time="2024-02-09T09:56:48.671907412Z" level=warning msg="cleaning up after shim disconnected" id=23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c namespace=k8s.io Feb 9 09:56:48.671991 env[1381]: time="2024-02-09T09:56:48.671971352Z" level=info msg="cleaning up dead shim" Feb 9 09:56:48.687098 env[1381]: time="2024-02-09T09:56:48.686981082Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2946 runtime=io.containerd.runc.v2\n" Feb 9 09:56:49.422601 env[1381]: time="2024-02-09T09:56:49.422555774Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:49.429851 env[1381]: time="2024-02-09T09:56:49.429814185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:49.436407 env[1381]: time="2024-02-09T09:56:49.436361539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:49.437016 env[1381]: time="2024-02-09T09:56:49.436985489Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:56:49.439700 env[1381]: time="2024-02-09T09:56:49.439666105Z" level=info msg="CreateContainer within sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:56:49.471196 env[1381]: time="2024-02-09T09:56:49.471152013Z" level=info msg="CreateContainer within sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\"" Feb 9 09:56:49.473298 env[1381]: time="2024-02-09T09:56:49.472567965Z" level=info msg="StartContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\"" Feb 9 09:56:49.486047 systemd[1]: Started cri-containerd-e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303.scope. Feb 9 09:56:49.530899 env[1381]: time="2024-02-09T09:56:49.530848273Z" level=info msg="StartContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" returns successfully" Feb 9 09:56:49.535350 env[1381]: time="2024-02-09T09:56:49.535304190Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:56:49.541854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c-rootfs.mount: Deactivated successfully. 
Feb 9 09:56:49.586551 env[1381]: time="2024-02-09T09:56:49.586494379Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\"" Feb 9 09:56:49.587271 env[1381]: time="2024-02-09T09:56:49.587246087Z" level=info msg="StartContainer for \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\"" Feb 9 09:56:49.609937 systemd[1]: Started cri-containerd-b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c.scope. Feb 9 09:56:49.658236 systemd[1]: cri-containerd-b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c.scope: Deactivated successfully. Feb 9 09:56:49.659577 env[1381]: time="2024-02-09T09:56:49.659534782Z" level=info msg="StartContainer for \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\" returns successfully" Feb 9 09:56:50.041957 env[1381]: time="2024-02-09T09:56:50.041907740Z" level=info msg="shim disconnected" id=b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c Feb 9 09:56:50.041957 env[1381]: time="2024-02-09T09:56:50.041953114Z" level=warning msg="cleaning up after shim disconnected" id=b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c namespace=k8s.io Feb 9 09:56:50.042189 env[1381]: time="2024-02-09T09:56:50.041965918Z" level=info msg="cleaning up dead shim" Feb 9 09:56:50.053214 env[1381]: time="2024-02-09T09:56:50.053158989Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3042 runtime=io.containerd.runc.v2\n" Feb 9 09:56:50.541131 systemd[1]: run-containerd-runc-k8s.io-b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c-runc.Si2Fo1.mount: Deactivated successfully. Feb 9 09:56:50.541222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c-rootfs.mount: Deactivated successfully. Feb 9 09:56:50.552521 env[1381]: time="2024-02-09T09:56:50.552481183Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:56:50.570172 kubelet[2474]: I0209 09:56:50.570127 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-w5szb" podStartSLOduration=-9.223372025284689e+09 pod.CreationTimestamp="2024-02-09 09:56:39 +0000 UTC" firstStartedPulling="2024-02-09 09:56:39.610774428 +0000 UTC m=+13.359803592" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:49.578473696 +0000 UTC m=+23.327502900" watchObservedRunningTime="2024-02-09 09:56:50.570087415 +0000 UTC m=+24.319116619" Feb 9 09:56:50.612759 env[1381]: time="2024-02-09T09:56:50.612704936Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\"" Feb 9 09:56:50.613327 env[1381]: time="2024-02-09T09:56:50.613301595Z" level=info msg="StartContainer for \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\"" Feb 9 09:56:50.630755 systemd[1]: Started cri-containerd-b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12.scope. 
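The podStartSLOduration=-9.223372025284689e+09 above is not a real measurement: lastFinishedPulling is the zero time (0001-01-01 00:00:00 UTC), and subtracting across roughly two millennia saturates Go's time.Duration at math.MinInt64 nanoseconds, which then dominates the reported seconds. A sketch of the saturation (illustrative arithmetic, not kubelet's exact code path):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	var never time.Time // zero value: 0001-01-01 00:00:00 UTC
	created := time.Date(2024, 2, 9, 9, 56, 39, 0, time.UTC)

	// time.Time.Sub clamps results that overflow int64 nanoseconds.
	d := never.Sub(created)
	fmt.Println(d == time.Duration(math.MinInt64)) // true
	fmt.Printf("%.9e seconds\n", d.Seconds())      // ~ -9.223372037e+09
}
```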
Feb 9 09:56:50.659493 systemd[1]: cri-containerd-b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12.scope: Deactivated successfully. Feb 9 09:56:50.663815 env[1381]: time="2024-02-09T09:56:50.661541960Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod604b4494_3b22_41b2_b21f_a6a8e0a0b6c7.slice/cri-containerd-b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12.scope/memory.events\": no such file or directory" Feb 9 09:56:50.668938 env[1381]: time="2024-02-09T09:56:50.668886839Z" level=info msg="StartContainer for \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\" returns successfully" Feb 9 09:56:50.697231 env[1381]: time="2024-02-09T09:56:50.697175630Z" level=info msg="shim disconnected" id=b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12 Feb 9 09:56:50.697231 env[1381]: time="2024-02-09T09:56:50.697225564Z" level=warning msg="cleaning up after shim disconnected" id=b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12 namespace=k8s.io Feb 9 09:56:50.697231 env[1381]: time="2024-02-09T09:56:50.697235647Z" level=info msg="cleaning up dead shim" Feb 9 09:56:50.704660 env[1381]: time="2024-02-09T09:56:50.704613297Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3099 runtime=io.containerd.runc.v2\n" Feb 9 09:56:51.541151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12-rootfs.mount: Deactivated successfully. Feb 9 09:56:51.557615 env[1381]: time="2024-02-09T09:56:51.557567364Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:56:51.603090 env[1381]: time="2024-02-09T09:56:51.603029593Z" level=info msg="CreateContainer within sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\"" Feb 9 09:56:51.603806 env[1381]: time="2024-02-09T09:56:51.603766290Z" level=info msg="StartContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\"" Feb 9 09:56:51.629240 systemd[1]: Started cri-containerd-fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d.scope. Feb 9 09:56:51.683600 env[1381]: time="2024-02-09T09:56:51.683539543Z" level=info msg="StartContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" returns successfully" Feb 9 09:56:51.795434 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:56:51.877961 kubelet[2474]: I0209 09:56:51.877931 2474 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:56:51.918936 kubelet[2474]: I0209 09:56:51.918624 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:51.918936 kubelet[2474]: I0209 09:56:51.918788 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:51.924167 systemd[1]: Created slice kubepods-burstable-podc8417009_e437_48a3_95f5_4667cf89d374.slice. Feb 9 09:56:51.928657 systemd[1]: Created slice kubepods-burstable-podb0a4e15f_3f76_4b07_b3ed_7fe29eff5a34.slice. 
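The cgroupsv2.Manager.EventChan warning above is a benign race: clean-cilium-state exits so quickly that its scope's memory.events file is already gone by the time containerd tries to attach an inotify watch to it. The failure mode is easy to reproduce; a minimal Linux-only sketch using golang.org/x/sys/unix:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.InotifyInit1(unix.IN_CLOEXEC)
	if err != nil {
		panic(err)
	}
	defer unix.Close(fd)

	// Watching a path that has already been removed fails exactly like
	// the EventChan warning above: ENOENT, "no such file or directory".
	_, err = unix.InotifyAddWatch(fd,
		"/sys/fs/cgroup/gone.scope/memory.events", unix.IN_MODIFY)
	fmt.Println(err)
}
```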
Feb 9 09:56:51.933898 kubelet[2474]: W0209 09:56:51.933873 2474 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-8b452ef1bd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-8b452ef1bd' and this object Feb 9 09:56:51.934074 kubelet[2474]: E0209 09:56:51.934063 2474 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-8b452ef1bd" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-8b452ef1bd' and this object Feb 9 09:56:52.010170 kubelet[2474]: I0209 09:56:52.010142 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8417009-e437-48a3-95f5-4667cf89d374-config-volume\") pod \"coredns-787d4945fb-km46m\" (UID: \"c8417009-e437-48a3-95f5-4667cf89d374\") " pod="kube-system/coredns-787d4945fb-km46m" Feb 9 09:56:52.010441 kubelet[2474]: I0209 09:56:52.010402 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrl7\" (UniqueName: \"kubernetes.io/projected/b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34-kube-api-access-tnrl7\") pod \"coredns-787d4945fb-hhbdp\" (UID: \"b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34\") " pod="kube-system/coredns-787d4945fb-hhbdp" Feb 9 09:56:52.010609 kubelet[2474]: I0209 09:56:52.010596 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlq7\" (UniqueName: \"kubernetes.io/projected/c8417009-e437-48a3-95f5-4667cf89d374-kube-api-access-ghlq7\") pod \"coredns-787d4945fb-km46m\" (UID: \"c8417009-e437-48a3-95f5-4667cf89d374\") " pod="kube-system/coredns-787d4945fb-km46m" Feb 9 09:56:52.010747 kubelet[2474]: I0209 09:56:52.010736 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34-config-volume\") pod \"coredns-787d4945fb-hhbdp\" (UID: \"b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34\") " pod="kube-system/coredns-787d4945fb-hhbdp" Feb 9 09:56:52.194446 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:56:53.112399 kubelet[2474]: E0209 09:56:53.112355 2474 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:53.112763 kubelet[2474]: E0209 09:56:53.112456 2474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34-config-volume podName:b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34 nodeName:}" failed. No retries permitted until 2024-02-09 09:56:53.612438153 +0000 UTC m=+27.361467357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34-config-volume") pod "coredns-787d4945fb-hhbdp" (UID: "b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:53.112763 kubelet[2474]: E0209 09:56:53.112355 2474 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:53.112763 kubelet[2474]: E0209 09:56:53.112665 2474 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8417009-e437-48a3-95f5-4667cf89d374-config-volume podName:c8417009-e437-48a3-95f5-4667cf89d374 nodeName:}" failed. No retries permitted until 2024-02-09 09:56:53.612655015 +0000 UTC m=+27.361684219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c8417009-e437-48a3-95f5-4667cf89d374-config-volume") pod "coredns-787d4945fb-km46m" (UID: "c8417009-e437-48a3-95f5-4667cf89d374") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:56:53.727893 env[1381]: time="2024-02-09T09:56:53.727854077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-km46m,Uid:c8417009-e437-48a3-95f5-4667cf89d374,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:53.733298 env[1381]: time="2024-02-09T09:56:53.733255017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hhbdp,Uid:b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:53.845133 systemd-networkd[1531]: cilium_host: Link UP Feb 9 09:56:53.846610 systemd-networkd[1531]: cilium_net: Link UP Feb 9 09:56:53.859894 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:56:53.860037 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:56:53.863181 systemd-networkd[1531]: cilium_net: Gained carrier Feb 9 09:56:53.864202 systemd-networkd[1531]: cilium_host: Gained carrier Feb 9 09:56:54.020801 systemd-networkd[1531]: cilium_vxlan: Link UP Feb 9 09:56:54.020808 systemd-networkd[1531]: cilium_vxlan: Gained carrier Feb 9 09:56:54.191601 systemd-networkd[1531]: cilium_net: Gained IPv6LL Feb 9 09:56:54.295445 kernel: NET: Registered PF_ALG protocol family Feb 9 09:56:54.470573 systemd-networkd[1531]: cilium_host: Gained IPv6LL Feb 9 09:56:55.004597 systemd-networkd[1531]: lxc_health: Link UP Feb 9 09:56:55.019454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:56:55.019573 systemd-networkd[1531]: lxc_health: Gained carrier Feb 9 09:56:55.046679 kubelet[2474]: I0209 09:56:55.046335 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tblds" podStartSLOduration=-9.223372019808489e+09 pod.CreationTimestamp="2024-02-09 09:56:38 +0000 UTC" firstStartedPulling="2024-02-09 09:56:39.198801749 +0000 UTC m=+12.947830913" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:52.573631571 +0000 UTC m=+26.322660815" watchObservedRunningTime="2024-02-09 09:56:55.046286738 +0000 UTC m=+28.795315942" Feb 9 09:56:55.315555 systemd-networkd[1531]: lxc18520a60b612: Link UP Feb 9 09:56:55.327556 kernel: eth0: renamed from tmpb1190 Feb 9 09:56:55.346882 systemd-networkd[1531]: lxc18520a60b612: Gained carrier Feb 9 09:56:55.347523 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc18520a60b612: link becomes ready Feb 9 09:56:55.358881 
systemd-networkd[1531]: lxccdf545709508: Link UP Feb 9 09:56:55.369463 kernel: eth0: renamed from tmp8e53c Feb 9 09:56:55.383650 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccdf545709508: link becomes ready Feb 9 09:56:55.383793 systemd-networkd[1531]: lxccdf545709508: Gained carrier Feb 9 09:56:55.878592 systemd-networkd[1531]: cilium_vxlan: Gained IPv6LL Feb 9 09:56:56.520692 systemd-networkd[1531]: lxccdf545709508: Gained IPv6LL Feb 9 09:56:56.710590 systemd-networkd[1531]: lxc_health: Gained IPv6LL Feb 9 09:56:56.710849 systemd-networkd[1531]: lxc18520a60b612: Gained IPv6LL Feb 9 09:56:58.939490 env[1381]: time="2024-02-09T09:56:58.937450125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:58.939490 env[1381]: time="2024-02-09T09:56:58.937496058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:58.939490 env[1381]: time="2024-02-09T09:56:58.937515583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:58.939490 env[1381]: time="2024-02-09T09:56:58.938976929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1190e9d771a9cd580b7473193b26d37c104edefd3fb01556f97209eeebc6033 pid=3653 runtime=io.containerd.runc.v2 Feb 9 09:56:58.966239 systemd[1]: run-containerd-runc-k8s.io-b1190e9d771a9cd580b7473193b26d37c104edefd3fb01556f97209eeebc6033-runc.GwW7OS.mount: Deactivated successfully. Feb 9 09:56:58.967714 systemd[1]: Started cri-containerd-b1190e9d771a9cd580b7473193b26d37c104edefd3fb01556f97209eeebc6033.scope. Feb 9 09:56:58.981320 env[1381]: time="2024-02-09T09:56:58.981236845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:58.981320 env[1381]: time="2024-02-09T09:56:58.981289379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:58.981603 env[1381]: time="2024-02-09T09:56:58.981300822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:58.981766 env[1381]: time="2024-02-09T09:56:58.981735617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e53c3151204d0a9ad818aa8c420eb3fb3b7a491f1ee5253c6a8741a2a4b0100 pid=3684 runtime=io.containerd.runc.v2 Feb 9 09:56:59.008101 systemd[1]: Started cri-containerd-8e53c3151204d0a9ad818aa8c420eb3fb3b7a491f1ee5253c6a8741a2a4b0100.scope. 
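cilium_host and cilium_net above are the two ends of a veth pair Cilium creates at startup, and each lxc* device is the host end of a per-pod veth (the "eth0: renamed from tmpXXXX" lines are the container end being renamed inside the pod's network namespace). A sketch of creating such a pair with the vishvananda/netlink package; illustrative, not Cilium's code, and it requires root:

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
		PeerName:  "cilium_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	// Bringing both ends up is what produces the ADDRCONF(NETDEV_CHANGE)
	// "link becomes ready" kernel messages seen in the log.
	for _, name := range []string{"cilium_host", "cilium_net"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if err := netlink.LinkSetUp(link); err != nil {
			log.Fatal(err)
		}
	}
}
```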
Feb 9 09:56:59.026461 env[1381]: time="2024-02-09T09:56:59.026397228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-km46m,Uid:c8417009-e437-48a3-95f5-4667cf89d374,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1190e9d771a9cd580b7473193b26d37c104edefd3fb01556f97209eeebc6033\"" Feb 9 09:56:59.029792 env[1381]: time="2024-02-09T09:56:59.029752221Z" level=info msg="CreateContainer within sandbox \"b1190e9d771a9cd580b7473193b26d37c104edefd3fb01556f97209eeebc6033\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:56:59.066443 env[1381]: time="2024-02-09T09:56:59.066363306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hhbdp,Uid:b0a4e15f-3f76-4b07-b3ed-7fe29eff5a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e53c3151204d0a9ad818aa8c420eb3fb3b7a491f1ee5253c6a8741a2a4b0100\"" Feb 9 09:56:59.070474 env[1381]: time="2024-02-09T09:56:59.068963222Z" level=info msg="CreateContainer within sandbox \"8e53c3151204d0a9ad818aa8c420eb3fb3b7a491f1ee5253c6a8741a2a4b0100\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:56:59.072197 env[1381]: time="2024-02-09T09:56:59.072162055Z" level=info msg="CreateContainer within sandbox \"b1190e9d771a9cd580b7473193b26d37c104edefd3fb01556f97209eeebc6033\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d266f7e3d9c01c1471466335acaf0ecaec5d206be97431c32eabf23ec0c83016\"" Feb 9 09:56:59.072820 env[1381]: time="2024-02-09T09:56:59.072791498Z" level=info msg="StartContainer for \"d266f7e3d9c01c1471466335acaf0ecaec5d206be97431c32eabf23ec0c83016\"" Feb 9 09:56:59.093715 systemd[1]: Started cri-containerd-d266f7e3d9c01c1471466335acaf0ecaec5d206be97431c32eabf23ec0c83016.scope. Feb 9 09:56:59.114676 env[1381]: time="2024-02-09T09:56:59.114623782Z" level=info msg="CreateContainer within sandbox \"8e53c3151204d0a9ad818aa8c420eb3fb3b7a491f1ee5253c6a8741a2a4b0100\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66b5d2860810dfb5789bf6c3aed77b0946ca82cca38c5a77249072681a3f088f\"" Feb 9 09:56:59.115750 env[1381]: time="2024-02-09T09:56:59.115712585Z" level=info msg="StartContainer for \"66b5d2860810dfb5789bf6c3aed77b0946ca82cca38c5a77249072681a3f088f\"" Feb 9 09:56:59.140407 env[1381]: time="2024-02-09T09:56:59.140356477Z" level=info msg="StartContainer for \"d266f7e3d9c01c1471466335acaf0ecaec5d206be97431c32eabf23ec0c83016\" returns successfully" Feb 9 09:56:59.148182 systemd[1]: Started cri-containerd-66b5d2860810dfb5789bf6c3aed77b0946ca82cca38c5a77249072681a3f088f.scope. 
Feb 9 09:56:59.187169 env[1381]: time="2024-02-09T09:56:59.187105839Z" level=info msg="StartContainer for \"66b5d2860810dfb5789bf6c3aed77b0946ca82cca38c5a77249072681a3f088f\" returns successfully" Feb 9 09:56:59.582821 kubelet[2474]: I0209 09:56:59.582789 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-hhbdp" podStartSLOduration=20.582755215 pod.CreationTimestamp="2024-02-09 09:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:59.581408665 +0000 UTC m=+33.330437869" watchObservedRunningTime="2024-02-09 09:56:59.582755215 +0000 UTC m=+33.331784419" Feb 9 09:56:59.635641 kubelet[2474]: I0209 09:56:59.635593 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-km46m" podStartSLOduration=20.635554432 pod.CreationTimestamp="2024-02-09 09:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:59.633411515 +0000 UTC m=+33.382440719" watchObservedRunningTime="2024-02-09 09:56:59.635554432 +0000 UTC m=+33.384583676" Feb 9 09:58:31.154764 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.12.6:60766.service. Feb 9 09:58:31.585902 sshd[3904]: Accepted publickey for core from 10.200.12.6 port 60766 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:31.587209 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:31.591634 systemd[1]: Started session-8.scope. Feb 9 09:58:31.592357 systemd-logind[1369]: New session 8 of user core. Feb 9 09:58:31.970746 sshd[3904]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:31.973485 systemd-logind[1369]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:58:31.973655 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:58:31.974473 systemd[1]: sshd@5-10.200.20.40:22-10.200.12.6:60766.service: Deactivated successfully. Feb 9 09:58:31.975756 systemd-logind[1369]: Removed session 8. Feb 9 09:58:37.036367 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.12.6:46200.service. Feb 9 09:58:37.459025 sshd[3917]: Accepted publickey for core from 10.200.12.6 port 46200 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:37.460644 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:37.464857 systemd[1]: Started session-9.scope. Feb 9 09:58:37.465141 systemd-logind[1369]: New session 9 of user core. Feb 9 09:58:37.820522 sshd[3917]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:37.823130 systemd[1]: sshd@6-10.200.20.40:22-10.200.12.6:46200.service: Deactivated successfully. Feb 9 09:58:37.823930 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:58:37.824562 systemd-logind[1369]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:58:37.825400 systemd-logind[1369]: Removed session 9. Feb 9 09:58:42.887537 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.12.6:46206.service. Feb 9 09:58:43.309679 sshd[3932]: Accepted publickey for core from 10.200.12.6 port 46206 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:43.311292 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:43.315411 systemd-logind[1369]: New session 10 of user core. 
Feb 9 09:58:43.316157 systemd[1]: Started session-10.scope. Feb 9 09:58:43.677667 sshd[3932]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:43.680904 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 09:58:43.681760 systemd[1]: sshd@7-10.200.20.40:22-10.200.12.6:46206.service: Deactivated successfully. Feb 9 09:58:43.682596 systemd-logind[1369]: Session 10 logged out. Waiting for processes to exit. Feb 9 09:58:43.683323 systemd-logind[1369]: Removed session 10. Feb 9 09:58:48.742209 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.12.6:34462.service. Feb 9 09:58:49.164677 sshd[3944]: Accepted publickey for core from 10.200.12.6 port 34462 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:49.166310 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:49.170598 systemd[1]: Started session-11.scope. Feb 9 09:58:49.170908 systemd-logind[1369]: New session 11 of user core. Feb 9 09:58:49.535843 sshd[3944]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:49.538681 systemd-logind[1369]: Session 11 logged out. Waiting for processes to exit. Feb 9 09:58:49.538930 systemd[1]: sshd@8-10.200.20.40:22-10.200.12.6:34462.service: Deactivated successfully. Feb 9 09:58:49.539680 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 09:58:49.540439 systemd-logind[1369]: Removed session 11. Feb 9 09:58:54.603683 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.12.6:34472.service. Feb 9 09:58:55.028801 sshd[3957]: Accepted publickey for core from 10.200.12.6 port 34472 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:55.030144 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:55.034462 systemd[1]: Started session-12.scope. Feb 9 09:58:55.034868 systemd-logind[1369]: New session 12 of user core. Feb 9 09:58:55.388337 sshd[3957]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:55.391289 systemd[1]: sshd@9-10.200.20.40:22-10.200.12.6:34472.service: Deactivated successfully. Feb 9 09:58:55.391494 systemd-logind[1369]: Session 12 logged out. Waiting for processes to exit. Feb 9 09:58:55.392021 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 09:58:55.392891 systemd-logind[1369]: Removed session 12. Feb 9 09:59:00.458658 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.12.6:56668.service. Feb 9 09:59:00.884120 sshd[3969]: Accepted publickey for core from 10.200.12.6 port 56668 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:00.885768 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:00.889900 systemd[1]: Started session-13.scope. Feb 9 09:59:00.890336 systemd-logind[1369]: New session 13 of user core. Feb 9 09:59:01.246565 sshd[3969]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:01.249151 systemd[1]: sshd@10-10.200.20.40:22-10.200.12.6:56668.service: Deactivated successfully. Feb 9 09:59:01.249939 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 09:59:01.250539 systemd-logind[1369]: Session 13 logged out. Waiting for processes to exit. Feb 9 09:59:01.251209 systemd-logind[1369]: Removed session 13. Feb 9 09:59:06.315142 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.12.6:56680.service. 
Feb 9 09:59:06.740162 sshd[3982]: Accepted publickey for core from 10.200.12.6 port 56680 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:06.741618 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:06.747032 systemd[1]: Started session-14.scope. Feb 9 09:59:06.747570 systemd-logind[1369]: New session 14 of user core. Feb 9 09:59:07.102652 sshd[3982]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:07.105560 systemd[1]: sshd@11-10.200.20.40:22-10.200.12.6:56680.service: Deactivated successfully. Feb 9 09:59:07.106296 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 09:59:07.107214 systemd-logind[1369]: Session 14 logged out. Waiting for processes to exit. Feb 9 09:59:07.108064 systemd-logind[1369]: Removed session 14. Feb 9 09:59:07.180605 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.12.6:42568.service. Feb 9 09:59:07.638301 sshd[3995]: Accepted publickey for core from 10.200.12.6 port 42568 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:07.639576 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:07.644064 systemd[1]: Started session-15.scope. Feb 9 09:59:07.644535 systemd-logind[1369]: New session 15 of user core. Feb 9 09:59:08.759575 sshd[3995]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:08.763002 systemd[1]: sshd@12-10.200.20.40:22-10.200.12.6:42568.service: Deactivated successfully. Feb 9 09:59:08.763784 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 09:59:08.764799 systemd-logind[1369]: Session 15 logged out. Waiting for processes to exit. Feb 9 09:59:08.765903 systemd-logind[1369]: Removed session 15. Feb 9 09:59:08.829911 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.12.6:42578.service. Feb 9 09:59:09.253834 sshd[4005]: Accepted publickey for core from 10.200.12.6 port 42578 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:09.255213 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:09.259924 systemd[1]: Started session-16.scope. Feb 9 09:59:09.260066 systemd-logind[1369]: New session 16 of user core. Feb 9 09:59:09.616630 sshd[4005]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:09.619242 systemd-logind[1369]: Session 16 logged out. Waiting for processes to exit. Feb 9 09:59:09.619845 systemd[1]: sshd@13-10.200.20.40:22-10.200.12.6:42578.service: Deactivated successfully. Feb 9 09:59:09.620628 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 09:59:09.621370 systemd-logind[1369]: Removed session 16. Feb 9 09:59:14.684688 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.12.6:42588.service. Feb 9 09:59:15.106560 sshd[4019]: Accepted publickey for core from 10.200.12.6 port 42588 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:15.108181 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:15.112276 systemd[1]: Started session-17.scope. Feb 9 09:59:15.112715 systemd-logind[1369]: New session 17 of user core. Feb 9 09:59:15.476242 sshd[4019]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:15.479323 systemd[1]: sshd@14-10.200.20.40:22-10.200.12.6:42588.service: Deactivated successfully. Feb 9 09:59:15.480099 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 09:59:15.481141 systemd-logind[1369]: Session 17 logged out. Waiting for processes to exit. 
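From sshd@5 onward the log settles into a steady cadence of short SSH sessions from 10.200.12.6 (publickey accept, pam_unix session open, session-N.scope, close), consistent with periodic automation rather than interactive use. Pairing the open and close lines gives each session's duration; a small sketch reading the journal from stdin, whose regexes are tailored to this log's timestamp layout (an assumption; adjust for other formats, and note these timestamps carry no year):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	openRe  = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user`)
	closeRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*[Ss]ession (\d+) logged out`)
)

func stamp(s string) time.Time {
	t, _ := time.Parse("Jan 2 15:04:05.000000", s) // year defaults to 0
	return t
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := openRe.FindStringSubmatch(sc.Text()); m != nil {
			opened[m[2]] = stamp(m[1])
		} else if m := closeRe.FindStringSubmatch(sc.Text()); m != nil {
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s lasted %v\n", m[2], stamp(m[1]).Sub(start))
			}
		}
	}
}
```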
Feb 9 09:59:15.481909 systemd-logind[1369]: Removed session 17. Feb 9 09:59:20.549104 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.12.6:47276.service. Feb 9 09:59:20.975256 sshd[4030]: Accepted publickey for core from 10.200.12.6 port 47276 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:20.977017 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:20.980806 systemd-logind[1369]: New session 18 of user core. Feb 9 09:59:20.981239 systemd[1]: Started session-18.scope. Feb 9 09:59:21.340815 sshd[4030]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:21.343573 systemd-logind[1369]: Session 18 logged out. Waiting for processes to exit. Feb 9 09:59:21.343732 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 09:59:21.344336 systemd[1]: sshd@15-10.200.20.40:22-10.200.12.6:47276.service: Deactivated successfully. Feb 9 09:59:21.345541 systemd-logind[1369]: Removed session 18. Feb 9 09:59:21.415180 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.12.6:47292.service. Feb 9 09:59:21.846531 sshd[4042]: Accepted publickey for core from 10.200.12.6 port 47292 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:21.847883 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:21.852363 systemd[1]: Started session-19.scope. Feb 9 09:59:21.852833 systemd-logind[1369]: New session 19 of user core. Feb 9 09:59:22.271646 sshd[4042]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:22.274519 systemd-logind[1369]: Session 19 logged out. Waiting for processes to exit. Feb 9 09:59:22.275280 systemd[1]: sshd@16-10.200.20.40:22-10.200.12.6:47292.service: Deactivated successfully. Feb 9 09:59:22.276034 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 09:59:22.276785 systemd-logind[1369]: Removed session 19. Feb 9 09:59:22.344102 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.12.6:47302.service. Feb 9 09:59:22.770044 sshd[4051]: Accepted publickey for core from 10.200.12.6 port 47302 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:22.771354 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:22.775814 systemd[1]: Started session-20.scope. Feb 9 09:59:22.776097 systemd-logind[1369]: New session 20 of user core. Feb 9 09:59:23.953479 sshd[4051]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:23.956841 systemd[1]: sshd@17-10.200.20.40:22-10.200.12.6:47302.service: Deactivated successfully. Feb 9 09:59:23.957606 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 09:59:23.958026 systemd-logind[1369]: Session 20 logged out. Waiting for processes to exit. Feb 9 09:59:23.959194 systemd-logind[1369]: Removed session 20. Feb 9 09:59:24.029714 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.12.6:47312.service. Feb 9 09:59:24.485604 sshd[4116]: Accepted publickey for core from 10.200.12.6 port 47312 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:24.487242 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:24.491397 systemd-logind[1369]: New session 21 of user core. Feb 9 09:59:24.492140 systemd[1]: Started session-21.scope. Feb 9 09:59:24.973603 sshd[4116]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:24.976504 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 09:59:24.976516 systemd-logind[1369]: Session 21 logged out. 
Waiting for processes to exit. Feb 9 09:59:24.977087 systemd[1]: sshd@18-10.200.20.40:22-10.200.12.6:47312.service: Deactivated successfully. Feb 9 09:59:24.978152 systemd-logind[1369]: Removed session 21. Feb 9 09:59:25.044850 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.12.6:47322.service. Feb 9 09:59:25.471602 sshd[4127]: Accepted publickey for core from 10.200.12.6 port 47322 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:25.472867 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:25.476795 systemd-logind[1369]: New session 22 of user core. Feb 9 09:59:25.477200 systemd[1]: Started session-22.scope. Feb 9 09:59:25.831241 sshd[4127]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:25.833953 systemd-logind[1369]: Session 22 logged out. Waiting for processes to exit. Feb 9 09:59:25.834042 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 09:59:25.834702 systemd[1]: sshd@19-10.200.20.40:22-10.200.12.6:47322.service: Deactivated successfully. Feb 9 09:59:25.835767 systemd-logind[1369]: Removed session 22. Feb 9 09:59:30.900300 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.12.6:45906.service. Feb 9 09:59:31.322677 sshd[4141]: Accepted publickey for core from 10.200.12.6 port 45906 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:31.324277 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:31.328496 systemd[1]: Started session-23.scope. Feb 9 09:59:31.329060 systemd-logind[1369]: New session 23 of user core. Feb 9 09:59:31.684692 sshd[4141]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:31.687193 systemd-logind[1369]: Session 23 logged out. Waiting for processes to exit. Feb 9 09:59:31.687353 systemd[1]: sshd@20-10.200.20.40:22-10.200.12.6:45906.service: Deactivated successfully. Feb 9 09:59:31.688080 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 09:59:31.688864 systemd-logind[1369]: Removed session 23. Feb 9 09:59:36.755094 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.12.6:45920.service. Feb 9 09:59:37.182593 sshd[4179]: Accepted publickey for core from 10.200.12.6 port 45920 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:37.183910 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:37.188086 systemd[1]: Started session-24.scope. Feb 9 09:59:37.188373 systemd-logind[1369]: New session 24 of user core. Feb 9 09:59:37.542539 sshd[4179]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:37.545404 systemd[1]: sshd@21-10.200.20.40:22-10.200.12.6:45920.service: Deactivated successfully. Feb 9 09:59:37.546189 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 09:59:37.547195 systemd-logind[1369]: Session 24 logged out. Waiting for processes to exit. Feb 9 09:59:37.548769 systemd-logind[1369]: Removed session 24. Feb 9 09:59:42.618295 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.12.6:44652.service. Feb 9 09:59:43.075920 sshd[4195]: Accepted publickey for core from 10.200.12.6 port 44652 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:43.077361 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:43.084720 systemd[1]: Started session-25.scope. Feb 9 09:59:43.085319 systemd-logind[1369]: New session 25 of user core. 
Feb 9 09:59:43.466945 sshd[4195]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:43.469574 systemd-logind[1369]: Session 25 logged out. Waiting for processes to exit. Feb 9 09:59:43.469740 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 09:59:43.470548 systemd[1]: sshd@22-10.200.20.40:22-10.200.12.6:44652.service: Deactivated successfully. Feb 9 09:59:43.471781 systemd-logind[1369]: Removed session 25. Feb 9 09:59:48.542532 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.12.6:37288.service. Feb 9 09:59:48.999987 sshd[4209]: Accepted publickey for core from 10.200.12.6 port 37288 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:49.001715 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:49.005704 systemd-logind[1369]: New session 26 of user core. Feb 9 09:59:49.006110 systemd[1]: Started session-26.scope. Feb 9 09:59:49.385869 sshd[4209]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:49.388550 systemd-logind[1369]: Session 26 logged out. Waiting for processes to exit. Feb 9 09:59:49.388716 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 09:59:49.389514 systemd[1]: sshd@23-10.200.20.40:22-10.200.12.6:37288.service: Deactivated successfully. Feb 9 09:59:49.390717 systemd-logind[1369]: Removed session 26. Feb 9 09:59:49.457352 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.12.6:37298.service. Feb 9 09:59:49.879335 sshd[4220]: Accepted publickey for core from 10.200.12.6 port 37298 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:49.880947 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:49.885232 systemd[1]: Started session-27.scope. Feb 9 09:59:49.886224 systemd-logind[1369]: New session 27 of user core. Feb 9 09:59:52.735410 systemd[1]: run-containerd-runc-k8s.io-fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d-runc.iAbglw.mount: Deactivated successfully. Feb 9 09:59:52.759155 env[1381]: time="2024-02-09T09:59:52.759087238Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:59:52.765084 env[1381]: time="2024-02-09T09:59:52.765045661Z" level=info msg="StopContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" with timeout 1 (s)" Feb 9 09:59:52.770087 env[1381]: time="2024-02-09T09:59:52.770046046Z" level=info msg="Stop container \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" with signal terminated" Feb 9 09:59:52.774301 env[1381]: time="2024-02-09T09:59:52.774249790Z" level=info msg="StopContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" with timeout 30 (s)" Feb 9 09:59:52.774844 env[1381]: time="2024-02-09T09:59:52.774821527Z" level=info msg="Stop container \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" with signal terminated" Feb 9 09:59:52.780712 systemd-networkd[1531]: lxc_health: Link DOWN Feb 9 09:59:52.780718 systemd-networkd[1531]: lxc_health: Lost carrier Feb 9 09:59:52.788693 systemd[1]: cri-containerd-e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303.scope: Deactivated successfully. Feb 9 09:59:52.808838 systemd[1]: cri-containerd-fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d.scope: Deactivated successfully. 
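"StopContainer ... with timeout 1 (s)" and "Stop container ... with signal terminated" above reflect the standard CRI shutdown contract: send SIGTERM, wait out the grace period, then SIGKILL whatever is still running. A process-level sketch of that contract (illustrative, not containerd's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout mirrors the CRI StopContainer contract: SIGTERM first,
// SIGKILL if the process has not exited once the grace period lapses.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		cmd.Process.Kill()
		<-done
		fmt.Println("killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithTimeout(cmd, 1*time.Second) // same 1s timeout as cilium-agent above
}
```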
Feb 9 09:59:52.809146 systemd[1]: cri-containerd-fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d.scope: Consumed 6.349s CPU time.
Feb 9 09:59:52.818862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303-rootfs.mount: Deactivated successfully.
Feb 9 09:59:52.833210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d-rootfs.mount: Deactivated successfully.
Feb 9 09:59:52.879250 env[1381]: time="2024-02-09T09:59:52.879198556Z" level=info msg="shim disconnected" id=e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303
Feb 9 09:59:52.879250 env[1381]: time="2024-02-09T09:59:52.879245384Z" level=warning msg="cleaning up after shim disconnected" id=e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303 namespace=k8s.io
Feb 9 09:59:52.879250 env[1381]: time="2024-02-09T09:59:52.879256661Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:52.880041 env[1381]: time="2024-02-09T09:59:52.879998915Z" level=info msg="shim disconnected" id=fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d
Feb 9 09:59:52.880233 env[1381]: time="2024-02-09T09:59:52.880214381Z" level=warning msg="cleaning up after shim disconnected" id=fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d namespace=k8s.io
Feb 9 09:59:52.880316 env[1381]: time="2024-02-09T09:59:52.880303358Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:52.889630 env[1381]: time="2024-02-09T09:59:52.889575310Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4293 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:52.891436 env[1381]: time="2024-02-09T09:59:52.891373178Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4289 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:52.894580 env[1381]: time="2024-02-09T09:59:52.894410416Z" level=info msg="StopContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" returns successfully"
Feb 9 09:59:52.895252 env[1381]: time="2024-02-09T09:59:52.895225091Z" level=info msg="StopPodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\""
Feb 9 09:59:52.895493 env[1381]: time="2024-02-09T09:59:52.895470629Z" level=info msg="Container to stop \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:59:52.895583 env[1381]: time="2024-02-09T09:59:52.895565966Z" level=info msg="Container to stop \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:59:52.895642 env[1381]: time="2024-02-09T09:59:52.895627510Z" level=info msg="Container to stop \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:59:52.895701 env[1381]: time="2024-02-09T09:59:52.895686535Z" level=info msg="Container to stop \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:59:52.895763 env[1381]: time="2024-02-09T09:59:52.895747760Z" level=info msg="Container to stop \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:59:52.897626 env[1381]: time="2024-02-09T09:59:52.897588218Z" level=info msg="StopContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" returns successfully"
Feb 9 09:59:52.898061 env[1381]: time="2024-02-09T09:59:52.898039944Z" level=info msg="StopPodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\""
Feb 9 09:59:52.898192 env[1381]: time="2024-02-09T09:59:52.898171791Z" level=info msg="Container to stop \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:59:52.901870 systemd[1]: cri-containerd-814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b.scope: Deactivated successfully.
Feb 9 09:59:52.914898 systemd[1]: cri-containerd-771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff.scope: Deactivated successfully.
Feb 9 09:59:52.941691 env[1381]: time="2024-02-09T09:59:52.941648074Z" level=info msg="shim disconnected" id=814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b
Feb 9 09:59:52.942341 env[1381]: time="2024-02-09T09:59:52.942315866Z" level=warning msg="cleaning up after shim disconnected" id=814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b namespace=k8s.io
Feb 9 09:59:52.942839 env[1381]: time="2024-02-09T09:59:52.942817380Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:52.943043 env[1381]: time="2024-02-09T09:59:52.942272997Z" level=info msg="shim disconnected" id=771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff
Feb 9 09:59:52.943095 env[1381]: time="2024-02-09T09:59:52.943044723Z" level=warning msg="cleaning up after shim disconnected" id=771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff namespace=k8s.io
Feb 9 09:59:52.943095 env[1381]: time="2024-02-09T09:59:52.943053841Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:52.952872 env[1381]: time="2024-02-09T09:59:52.952816229Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4351 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:52.953169 env[1381]: time="2024-02-09T09:59:52.953139108Z" level=info msg="TearDown network for sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" successfully"
Feb 9 09:59:52.953169 env[1381]: time="2024-02-09T09:59:52.953167581Z" level=info msg="StopPodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" returns successfully"
Feb 9 09:59:52.956658 env[1381]: time="2024-02-09T09:59:52.956615955Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4350 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:52.957135 env[1381]: time="2024-02-09T09:59:52.957108711Z" level=info msg="TearDown network for sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" successfully"
Feb 9 09:59:52.957600 env[1381]: time="2024-02-09T09:59:52.957568116Z" level=info msg="StopPodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" returns successfully"
Feb 9 09:59:53.127872 kubelet[2474]: I0209 09:59:53.127828 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm7k2\" (UniqueName: \"kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-kube-api-access-vm7k2\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.127872 kubelet[2474]: I0209 09:59:53.127876 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnlbd\" (UniqueName: \"kubernetes.io/projected/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-kube-api-access-vnlbd\") pod \"f6bc1ad8-65b4-4e39-8112-d5f6d752bffa\" (UID: \"f6bc1ad8-65b4-4e39-8112-d5f6d752bffa\") "
Feb 9 09:59:53.128249 kubelet[2474]: I0209 09:59:53.127897 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-cgroup\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128249 kubelet[2474]: I0209 09:59:53.127918 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-bpf-maps\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128249 kubelet[2474]: I0209 09:59:53.127935 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-lib-modules\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128249 kubelet[2474]: I0209 09:59:53.127957 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-cilium-config-path\") pod \"f6bc1ad8-65b4-4e39-8112-d5f6d752bffa\" (UID: \"f6bc1ad8-65b4-4e39-8112-d5f6d752bffa\") "
Feb 9 09:59:53.128249 kubelet[2474]: I0209 09:59:53.127979 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hubble-tls\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128249 kubelet[2474]: I0209 09:59:53.127999 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-clustermesh-secrets\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128391 kubelet[2474]: I0209 09:59:53.128016 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-net\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128391 kubelet[2474]: I0209 09:59:53.128034 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-kernel\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128391 kubelet[2474]: I0209 09:59:53.128051 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hostproc\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128391 kubelet[2474]: I0209 09:59:53.128067 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-etc-cni-netd\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128391 kubelet[2474]: I0209 09:59:53.128087 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-xtables-lock\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128391 kubelet[2474]: I0209 09:59:53.128105 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-run\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128561 kubelet[2474]: I0209 09:59:53.128126 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cni-path\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128561 kubelet[2474]: I0209 09:59:53.128145 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-config-path\") pod \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\" (UID: \"604b4494-3b22-41b2-b21f-a6a8e0a0b6c7\") "
Feb 9 09:59:53.128561 kubelet[2474]: W0209 09:59:53.128342 2474 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:59:53.130349 kubelet[2474]: I0209 09:59:53.130308 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:59:53.130480 kubelet[2474]: I0209 09:59:53.130447 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130517 kubelet[2474]: I0209 09:59:53.130472 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130517 kubelet[2474]: I0209 09:59:53.130503 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130565 kubelet[2474]: I0209 09:59:53.130519 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130565 kubelet[2474]: I0209 09:59:53.130533 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130565 kubelet[2474]: I0209 09:59:53.130548 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130629 kubelet[2474]: I0209 09:59:53.130563 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130629 kubelet[2474]: I0209 09:59:53.130579 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130887 kubelet[2474]: I0209 09:59:53.130862 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.130995 kubelet[2474]: W0209 09:59:53.130968 2474 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:59:53.131566 kubelet[2474]: I0209 09:59:53.131545 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:59:53.132778 kubelet[2474]: I0209 09:59:53.132742 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 09:59:53.133061 kubelet[2474]: I0209 09:59:53.133040 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6bc1ad8-65b4-4e39-8112-d5f6d752bffa" (UID: "f6bc1ad8-65b4-4e39-8112-d5f6d752bffa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:59:53.135342 kubelet[2474]: I0209 09:59:53.135302 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:59:53.135581 kubelet[2474]: I0209 09:59:53.135561 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-kube-api-access-vm7k2" (OuterVolumeSpecName: "kube-api-access-vm7k2") pod "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" (UID: "604b4494-3b22-41b2-b21f-a6a8e0a0b6c7"). InnerVolumeSpecName "kube-api-access-vm7k2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:59:53.137482 kubelet[2474]: I0209 09:59:53.137452 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-kube-api-access-vnlbd" (OuterVolumeSpecName: "kube-api-access-vnlbd") pod "f6bc1ad8-65b4-4e39-8112-d5f6d752bffa" (UID: "f6bc1ad8-65b4-4e39-8112-d5f6d752bffa"). InnerVolumeSpecName "kube-api-access-vnlbd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 09:59:53.228714 kubelet[2474]: I0209 09:59:53.228677 2474 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.228891 kubelet[2474]: I0209 09:59:53.228881 2474 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-clustermesh-secrets\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.228960 kubelet[2474]: I0209 09:59:53.228950 2474 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-host-proc-sys-net\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229027 kubelet[2474]: I0209 09:59:53.229018 2474 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hostproc\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229111 kubelet[2474]: I0209 09:59:53.229102 2474 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-etc-cni-netd\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229180 kubelet[2474]: I0209 09:59:53.229172 2474 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-xtables-lock\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229242 kubelet[2474]: I0209 09:59:53.229234 2474 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cni-path\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229305 kubelet[2474]: I0209 09:59:53.229289 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-run\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229367 kubelet[2474]: I0209 09:59:53.229359 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-config-path\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229454 kubelet[2474]: I0209 09:59:53.229442 2474 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vm7k2\" (UniqueName: \"kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-kube-api-access-vm7k2\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229535 kubelet[2474]: I0209 09:59:53.229526 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-cilium-cgroup\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229604 kubelet[2474]: I0209 09:59:53.229595 2474 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-bpf-maps\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229666 kubelet[2474]: I0209 09:59:53.229657 2474 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vnlbd\" (UniqueName: \"kubernetes.io/projected/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-kube-api-access-vnlbd\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229734 kubelet[2474]: I0209 09:59:53.229725 2474 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-hubble-tls\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229796 kubelet[2474]: I0209 09:59:53.229788 2474 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7-lib-modules\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.229865 kubelet[2474]: I0209 09:59:53.229856 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa-cilium-config-path\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\""
Feb 9 09:59:53.730874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff-rootfs.mount: Deactivated successfully.
Feb 9 09:59:53.730978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff-shm.mount: Deactivated successfully.
Feb 9 09:59:53.731036 systemd[1]: var-lib-kubelet-pods-f6bc1ad8\x2d65b4\x2d4e39\x2d8112\x2dd5f6d752bffa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvnlbd.mount: Deactivated successfully.
Feb 9 09:59:53.731087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b-rootfs.mount: Deactivated successfully.
Feb 9 09:59:53.731141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b-shm.mount: Deactivated successfully.
Feb 9 09:59:53.731195 systemd[1]: var-lib-kubelet-pods-604b4494\x2d3b22\x2d41b2\x2db21f\x2da6a8e0a0b6c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvm7k2.mount: Deactivated successfully.
Feb 9 09:59:53.731246 systemd[1]: var-lib-kubelet-pods-604b4494\x2d3b22\x2d41b2\x2db21f\x2da6a8e0a0b6c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 09:59:53.731294 systemd[1]: var-lib-kubelet-pods-604b4494\x2d3b22\x2d41b2\x2db21f\x2da6a8e0a0b6c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 09:59:53.867938 kubelet[2474]: I0209 09:59:53.867913 2474 scope.go:115] "RemoveContainer" containerID="fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d"
Feb 9 09:59:53.869855 env[1381]: time="2024-02-09T09:59:53.869401323Z" level=info msg="RemoveContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\""
Feb 9 09:59:53.871209 systemd[1]: Removed slice kubepods-burstable-pod604b4494_3b22_41b2_b21f_a6a8e0a0b6c7.slice.
Feb 9 09:59:53.871304 systemd[1]: kubepods-burstable-pod604b4494_3b22_41b2_b21f_a6a8e0a0b6c7.slice: Consumed 6.472s CPU time.
Feb 9 09:59:53.875869 systemd[1]: Removed slice kubepods-besteffort-podf6bc1ad8_65b4_4e39_8112_d5f6d752bffa.slice.
Feb 9 09:59:53.883230 env[1381]: time="2024-02-09T09:59:53.883089674Z" level=info msg="RemoveContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" returns successfully"
Feb 9 09:59:53.883687 kubelet[2474]: I0209 09:59:53.883670 2474 scope.go:115] "RemoveContainer" containerID="b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12"
Feb 9 09:59:53.885062 env[1381]: time="2024-02-09T09:59:53.884818203Z" level=info msg="RemoveContainer for \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\""
Feb 9 09:59:53.895814 env[1381]: time="2024-02-09T09:59:53.895659543Z" level=info msg="RemoveContainer for \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\" returns successfully"
Feb 9 09:59:53.896035 kubelet[2474]: I0209 09:59:53.896016 2474 scope.go:115] "RemoveContainer" containerID="b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c"
Feb 9 09:59:53.898259 env[1381]: time="2024-02-09T09:59:53.897964289Z" level=info msg="RemoveContainer for \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\""
Feb 9 09:59:53.908533 env[1381]: time="2024-02-09T09:59:53.908484829Z" level=info msg="RemoveContainer for \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\" returns successfully"
Feb 9 09:59:53.908931 kubelet[2474]: I0209 09:59:53.908896 2474 scope.go:115] "RemoveContainer" containerID="23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c"
Feb 9 09:59:53.910340 env[1381]: time="2024-02-09T09:59:53.910287580Z" level=info msg="RemoveContainer for \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\""
Feb 9 09:59:53.918134 env[1381]: time="2024-02-09T09:59:53.918097515Z" level=info msg="RemoveContainer for \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\" returns successfully"
Feb 9 09:59:53.918389 kubelet[2474]: I0209 09:59:53.918372 2474 scope.go:115] "RemoveContainer" containerID="bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd"
Feb 9 09:59:53.919824 env[1381]: time="2024-02-09T09:59:53.919590423Z" level=info msg="RemoveContainer for \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\""
Feb 9 09:59:53.927157 env[1381]: time="2024-02-09T09:59:53.927125586Z" level=info msg="RemoveContainer for \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\" returns successfully"
Feb 9 09:59:53.927405 kubelet[2474]: I0209 09:59:53.927387 2474 scope.go:115] "RemoveContainer" containerID="fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d"
Feb 9 09:59:53.929718 env[1381]: time="2024-02-09T09:59:53.929623604Z" level=error msg="ContainerStatus for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\": not found"
Feb 9 09:59:53.932176 kubelet[2474]: E0209 09:59:53.932141 2474 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\": not found" containerID="fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d"
Feb 9 09:59:53.932338 kubelet[2474]: I0209 09:59:53.932325 2474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d} err="failed to get container status \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\": not found"
Feb 9 09:59:53.932452 kubelet[2474]: I0209 09:59:53.932441 2474 scope.go:115] "RemoveContainer" containerID="b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12"
Feb 9 09:59:53.932862 env[1381]: time="2024-02-09T09:59:53.932801013Z" level=error msg="ContainerStatus for \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\": not found"
Feb 9 09:59:53.933032 kubelet[2474]: E0209 09:59:53.933019 2474 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\": not found" containerID="b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12"
Feb 9 09:59:53.933178 kubelet[2474]: I0209 09:59:53.933161 2474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12} err="failed to get container status \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8828632e4db34f0f58a8d6c4a3f48298fb066bbee15786eeec8843681349e12\": not found"
Feb 9 09:59:53.933292 kubelet[2474]: I0209 09:59:53.933281 2474 scope.go:115] "RemoveContainer" containerID="b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c"
Feb 9 09:59:53.933635 env[1381]: time="2024-02-09T09:59:53.933583778Z" level=error msg="ContainerStatus for \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\": not found"
Feb 9 09:59:53.933758 kubelet[2474]: E0209 09:59:53.933739 2474 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\": not found" containerID="b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c"
Feb 9 09:59:53.933814 kubelet[2474]: I0209 09:59:53.933767 2474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c} err="failed to get container status \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1f4a93a618cc71ec5355c4c3357bf8fd0d67b2411699deed2d1d3bb73d7e76c\": not found"
Feb 9 09:59:53.933814 kubelet[2474]: I0209 09:59:53.933781 2474 scope.go:115] "RemoveContainer" containerID="23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c"
Feb 9 09:59:53.933960 env[1381]: time="2024-02-09T09:59:53.933908137Z" level=error msg="ContainerStatus for \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\": not found"
Feb 9 09:59:53.934112 kubelet[2474]: E0209 09:59:53.934084 2474 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\": not found" containerID="23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c"
Feb 9 09:59:53.934219 kubelet[2474]: I0209 09:59:53.934208 2474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c} err="failed to get container status \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\": rpc error: code = NotFound desc = an error occurred when try to find container \"23e175d10d44a1203bec10067d35bc0c7501f169d7254252248cdfdf16bac51c\": not found"
Feb 9 09:59:53.934355 kubelet[2474]: I0209 09:59:53.934343 2474 scope.go:115] "RemoveContainer" containerID="bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd"
Feb 9 09:59:53.934715 env[1381]: time="2024-02-09T09:59:53.934662909Z" level=error msg="ContainerStatus for \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\": not found"
Feb 9 09:59:53.934892 kubelet[2474]: E0209 09:59:53.934833 2474 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\": not found" containerID="bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd"
Feb 9 09:59:53.934957 kubelet[2474]: I0209 09:59:53.934904 2474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd} err="failed to get container status \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcb6a7d52e8548dba49e9d72842d21739f476e94ec50894eceadf8bf61561ddd\": not found"
Feb 9 09:59:53.934957 kubelet[2474]: I0209 09:59:53.934915 2474 scope.go:115] "RemoveContainer" containerID="e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303"
Feb 9 09:59:53.936208 env[1381]: time="2024-02-09T09:59:53.936174293Z" level=info msg="RemoveContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\""
Feb 9 09:59:53.946591 env[1381]: time="2024-02-09T09:59:53.946547309Z" level=info msg="RemoveContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" returns successfully"
Feb 9 09:59:53.946914 kubelet[2474]: I0209 09:59:53.946890 2474 scope.go:115] "RemoveContainer" containerID="e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303"
Feb 9 09:59:53.947210 env[1381]: time="2024-02-09T09:59:53.947122086Z" level=error msg="ContainerStatus for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\": not found"
Feb 9 09:59:53.947371 kubelet[2474]: E0209 09:59:53.947358 2474 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\": not found" containerID="e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303"
Feb 9 09:59:53.947485 kubelet[2474]: I0209 09:59:53.947474 2474 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303} err="failed to get container status \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\": not found"
Feb 9 09:59:54.448604 env[1381]: time="2024-02-09T09:59:54.448541170Z" level=info msg="StopContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" with timeout 1 (s)"
Feb 9 09:59:54.448748 env[1381]: time="2024-02-09T09:59:54.448589758Z" level=error msg="StopContainer for \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\": not found"
Feb 9 09:59:54.448748 env[1381]: time="2024-02-09T09:59:54.448568123Z" level=info msg="StopContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" with timeout 1 (s)"
Feb 9 09:59:54.448748 env[1381]: time="2024-02-09T09:59:54.448666379Z" level=error msg="StopContainer for \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\": not found"
Feb 9 09:59:54.449061 kubelet[2474]: E0209 09:59:54.449044 2474 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303\": not found" containerID="e8a465d87402f88390e2cb32990f9207dd5700e80aa12a495f8fbb0861b85303"
Feb 9 09:59:54.449478 kubelet[2474]: E0209 09:59:54.449239 2474 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d\": not found" containerID="fe4dc5e892f544995fff6458de2dc74bf90b9943174b709f284f6193c8975d2d"
Feb 9 09:59:54.449972 env[1381]: time="2024-02-09T09:59:54.449769147Z" level=info msg="StopPodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\""
Feb 9 09:59:54.449972 env[1381]: time="2024-02-09T09:59:54.449830651Z" level=info msg="StopPodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\""
Feb 9 09:59:54.449972 env[1381]: time="2024-02-09T09:59:54.449848007Z" level=info msg="TearDown network for sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" successfully"
Feb 9 09:59:54.449972 env[1381]: time="2024-02-09T09:59:54.449877360Z" level=info msg="StopPodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" returns successfully"
Feb 9 09:59:54.449972 env[1381]: time="2024-02-09T09:59:54.449888597Z" level=info msg="TearDown network for sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" successfully"
Feb 9 09:59:54.449972 env[1381]: time="2024-02-09T09:59:54.449926388Z" level=info msg="StopPodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" returns successfully"
Feb 9 09:59:54.450154 kubelet[2474]: I0209 09:59:54.449999 2474 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=604b4494-3b22-41b2-b21f-a6a8e0a0b6c7 path="/var/lib/kubelet/pods/604b4494-3b22-41b2-b21f-a6a8e0a0b6c7/volumes"
Feb 9 09:59:54.451864 kubelet[2474]: I0209 09:59:54.451836 2474 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f6bc1ad8-65b4-4e39-8112-d5f6d752bffa path="/var/lib/kubelet/pods/f6bc1ad8-65b4-4e39-8112-d5f6d752bffa/volumes"
Feb 9 09:59:54.734102 sshd[4220]: pam_unix(sshd:session): session closed for user core
Feb 9 09:59:54.737122 systemd[1]: sshd@24-10.200.20.40:22-10.200.12.6:37298.service: Deactivated successfully.
Feb 9 09:59:54.737659 systemd-logind[1369]: Session 27 logged out. Waiting for processes to exit.
Feb 9 09:59:54.737854 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 09:59:54.738036 systemd[1]: session-27.scope: Consumed 1.959s CPU time.
Feb 9 09:59:54.738970 systemd-logind[1369]: Removed session 27.
Feb 9 09:59:54.806275 systemd[1]: Started sshd@25-10.200.20.40:22-10.200.12.6:37314.service.
Feb 9 09:59:55.228659 sshd[4383]: Accepted publickey for core from 10.200.12.6 port 37314 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:59:55.230302 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:59:55.234657 systemd[1]: Started session-28.scope.
Feb 9 09:59:55.235514 systemd-logind[1369]: New session 28 of user core.
Feb 9 09:59:56.201072 kubelet[2474]: I0209 09:59:56.201039 2474 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:59:56.201496 kubelet[2474]: E0209 09:59:56.201480 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" containerName="cilium-agent"
Feb 9 09:59:56.201590 kubelet[2474]: E0209 09:59:56.201571 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" containerName="mount-cgroup"
Feb 9 09:59:56.201645 kubelet[2474]: E0209 09:59:56.201636 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" containerName="apply-sysctl-overwrites"
Feb 9 09:59:56.201700 kubelet[2474]: E0209 09:59:56.201692 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" containerName="clean-cilium-state"
Feb 9 09:59:56.201758 kubelet[2474]: E0209 09:59:56.201749 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6bc1ad8-65b4-4e39-8112-d5f6d752bffa" containerName="cilium-operator"
Feb 9 09:59:56.201814 kubelet[2474]: E0209 09:59:56.201805 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" containerName="mount-bpf-fs"
Feb 9 09:59:56.201894 kubelet[2474]: I0209 09:59:56.201883 2474 memory_manager.go:346] "RemoveStaleState removing state" podUID="f6bc1ad8-65b4-4e39-8112-d5f6d752bffa" containerName="cilium-operator"
Feb 9 09:59:56.201956 kubelet[2474]: I0209 09:59:56.201947 2474 memory_manager.go:346] "RemoveStaleState removing state" podUID="604b4494-3b22-41b2-b21f-a6a8e0a0b6c7" containerName="cilium-agent"
Feb 9 09:59:56.206686 systemd[1]: Created slice kubepods-burstable-pod1a665b0d_ca0b_40c2_b557_b00a7bd0debe.slice.
Feb 9 09:59:56.235516 sshd[4383]: pam_unix(sshd:session): session closed for user core
Feb 9 09:59:56.239213 systemd[1]: sshd@25-10.200.20.40:22-10.200.12.6:37314.service: Deactivated successfully.
Feb 9 09:59:56.239963 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 09:59:56.240361 systemd-logind[1369]: Session 28 logged out. Waiting for processes to exit.
Feb 9 09:59:56.241873 systemd-logind[1369]: Removed session 28.
Feb 9 09:59:56.245914 kubelet[2474]: I0209 09:59:56.245888 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-run\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246099 kubelet[2474]: I0209 09:59:56.246084 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-bpf-maps\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246200 kubelet[2474]: I0209 09:59:56.246189 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-etc-cni-netd\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246286 kubelet[2474]: I0209 09:59:56.246275 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-clustermesh-secrets\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246446 kubelet[2474]: I0209 09:59:56.246390 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-xtables-lock\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246446 kubelet[2474]: I0209 09:59:56.246442 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-net\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246527 kubelet[2474]: I0209 09:59:56.246469 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cni-path\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246527 kubelet[2474]: I0209 09:59:56.246488 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hostproc\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246527 kubelet[2474]: I0209 09:59:56.246507 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-ipsec-secrets\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246527 kubelet[2474]: I0209 09:59:56.246528 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-kernel\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246618 kubelet[2474]: I0209 09:59:56.246549 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-cgroup\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246618 kubelet[2474]: I0209 09:59:56.246570 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-lib-modules\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246618 kubelet[2474]: I0209 09:59:56.246590 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-config-path\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246618 kubelet[2474]: I0209 09:59:56.246608 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hubble-tls\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.246708 kubelet[2474]: I0209 09:59:56.246626 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sct9k\" (UniqueName: \"kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-kube-api-access-sct9k\") pod \"cilium-ncxkr\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " pod="kube-system/cilium-ncxkr"
Feb 9 09:59:56.308355 systemd[1]: Started sshd@26-10.200.20.40:22-10.200.12.6:37318.service.
Feb 9 09:59:56.512088 env[1381]: time="2024-02-09T09:59:56.511979497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ncxkr,Uid:1a665b0d-ca0b-40c2-b557-b00a7bd0debe,Namespace:kube-system,Attempt:0,}"
Feb 9 09:59:56.515631 kubelet[2474]: E0209 09:59:56.515453 2474 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:59:56.550486 env[1381]: time="2024-02-09T09:59:56.550268302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:59:56.550486 env[1381]: time="2024-02-09T09:59:56.550315451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:59:56.550486 env[1381]: time="2024-02-09T09:59:56.550328927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:59:56.550792 env[1381]: time="2024-02-09T09:59:56.550724271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422 pid=4406 runtime=io.containerd.runc.v2
Feb 9 09:59:56.560900 systemd[1]: Started cri-containerd-0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422.scope.
Feb 9 09:59:56.591834 env[1381]: time="2024-02-09T09:59:56.591788801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ncxkr,Uid:1a665b0d-ca0b-40c2-b557-b00a7bd0debe,Namespace:kube-system,Attempt:0,} returns sandbox id \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\""
Feb 9 09:59:56.595821 env[1381]: time="2024-02-09T09:59:56.595780470Z" level=info msg="CreateContainer within sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:59:56.629668 env[1381]: time="2024-02-09T09:59:56.629577688Z" level=info msg="CreateContainer within sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\""
Feb 9 09:59:56.631733 env[1381]: time="2024-02-09T09:59:56.630280278Z" level=info msg="StartContainer for \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\""
Feb 9 09:59:56.645557 systemd[1]: Started cri-containerd-becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799.scope.
Feb 9 09:59:56.656291 systemd[1]: cri-containerd-becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799.scope: Deactivated successfully.
Feb 9 09:59:56.714465 env[1381]: time="2024-02-09T09:59:56.714398374Z" level=info msg="shim disconnected" id=becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799
Feb 9 09:59:56.714748 env[1381]: time="2024-02-09T09:59:56.714720376Z" level=warning msg="cleaning up after shim disconnected" id=becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799 namespace=k8s.io
Feb 9 09:59:56.714845 env[1381]: time="2024-02-09T09:59:56.714829909Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:56.722163 env[1381]: time="2024-02-09T09:59:56.722124294Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4465 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:59:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 09:59:56.722593 env[1381]: time="2024-02-09T09:59:56.722502322Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed"
Feb 9 09:59:56.723529 env[1381]: time="2024-02-09T09:59:56.722856556Z" level=error msg="Failed to pipe stderr of container \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\"" error="reading from a closed fifo"
Feb 9 09:59:56.723590 env[1381]: time="2024-02-09T09:59:56.723468847Z" level=error msg="Failed to pipe stdout of container \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\"" error="reading from a closed fifo"
Feb 9 09:59:56.728727 env[1381]: time="2024-02-09T09:59:56.728670462Z" level=error msg="StartContainer for \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 09:59:56.729042 kubelet[2474]: E0209 09:59:56.729014 2474 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799"
Feb 9 09:59:56.729155 kubelet[2474]: E0209 09:59:56.729136 2474 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 09:59:56.729155 kubelet[2474]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 09:59:56.729155 kubelet[2474]: rm /hostbin/cilium-mount
Feb 9 09:59:56.729155 kubelet[2474]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sct9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ncxkr_kube-system(1a665b0d-ca0b-40c2-b557-b00a7bd0debe): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 09:59:56.729290 kubelet[2474]: E0209 09:59:56.729172 2474 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ncxkr" podUID=1a665b0d-ca0b-40c2-b557-b00a7bd0debe
Feb 9 09:59:56.733741 sshd[4393]: Accepted publickey for core from 10.200.12.6 port 37318 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0
Feb 9 09:59:56.735029 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 09:59:56.738719 systemd-logind[1369]: New session 29 of user core.
Feb 9 09:59:56.739054 systemd[1]: Started session-29.scope.
Feb 9 09:59:56.890755 env[1381]: time="2024-02-09T09:59:56.890717600Z" level=info msg="CreateContainer within sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 9 09:59:56.926189 env[1381]: time="2024-02-09T09:59:56.926144742Z" level=info msg="CreateContainer within sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\""
Feb 9 09:59:56.927080 env[1381]: time="2024-02-09T09:59:56.927048762Z" level=info msg="StartContainer for \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\""
Feb 9 09:59:56.942645 systemd[1]: Started cri-containerd-ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60.scope.
Feb 9 09:59:56.953855 systemd[1]: cri-containerd-ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60.scope: Deactivated successfully. Feb 9 09:59:56.985366 env[1381]: time="2024-02-09T09:59:56.985305310Z" level=info msg="shim disconnected" id=ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60 Feb 9 09:59:56.985366 env[1381]: time="2024-02-09T09:59:56.985358737Z" level=warning msg="cleaning up after shim disconnected" id=ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60 namespace=k8s.io Feb 9 09:59:56.985366 env[1381]: time="2024-02-09T09:59:56.985368574Z" level=info msg="cleaning up dead shim" Feb 9 09:59:56.995276 env[1381]: time="2024-02-09T09:59:56.995223697Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4510 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:59:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 09:59:56.995550 env[1381]: time="2024-02-09T09:59:56.995487993Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed" Feb 9 09:59:56.996507 env[1381]: time="2024-02-09T09:59:56.996466875Z" level=error msg="Failed to pipe stdout of container \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\"" error="reading from a closed fifo" Feb 9 09:59:56.996802 env[1381]: time="2024-02-09T09:59:56.996768441Z" level=error msg="Failed to pipe stderr of container \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\"" error="reading from a closed fifo" Feb 9 09:59:57.001634 env[1381]: time="2024-02-09T09:59:57.001581272Z" level=error msg="StartContainer for \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 09:59:57.002338 kubelet[2474]: E0209 09:59:57.001868 2474 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60" Feb 9 09:59:57.002338 kubelet[2474]: E0209 09:59:57.001965 2474 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 09:59:57.002338 kubelet[2474]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 09:59:57.002338 kubelet[2474]: rm /hostbin/cilium-mount Feb 9 09:59:57.002526 kubelet[2474]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sct9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ncxkr_kube-system(1a665b0d-ca0b-40c2-b557-b00a7bd0debe): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 09:59:57.002608 kubelet[2474]: E0209 09:59:57.001997 2474 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ncxkr" podUID=1a665b0d-ca0b-40c2-b557-b00a7bd0debe Feb 9 09:59:57.118541 sshd[4393]: pam_unix(sshd:session): session closed for user core Feb 9 09:59:57.121291 systemd-logind[1369]: Session 29 logged out. Waiting for processes to exit. Feb 9 09:59:57.122063 systemd[1]: sshd@26-10.200.20.40:22-10.200.12.6:37318.service: Deactivated successfully. Feb 9 09:59:57.122842 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 09:59:57.123530 systemd-logind[1369]: Removed session 29. Feb 9 09:59:57.189481 systemd[1]: Started sshd@27-10.200.20.40:22-10.200.12.6:36520.service. Feb 9 09:59:57.615713 sshd[4525]: Accepted publickey for core from 10.200.12.6 port 36520 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:59:57.617337 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:59:57.621066 systemd-logind[1369]: New session 30 of user core. Feb 9 09:59:57.621774 systemd[1]: Started session-30.scope. 
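[Editor's note] The secondary `failed to read init pid file` and `reading from a closed fifo` errors in both attempts are downstream noise, not separate faults: the shim exits before the container's stdout/stderr FIFOs are ever written, so containerd's log-copy loop fails. A rough sketch of that copy path (path and function name are illustrative, not containerd's actual internals):

```go
// Rough sketch of a FIFO log-copy loop, assuming containerd-style piping:
// the daemon opens the task's stdout FIFO and streams it to its logger.
// If the writing side dies before producing output, the copy fails with
// a closed-fifo error like the ones logged above.
package main

import (
	"io"
	"log"
	"os"
)

func copyFifo(path string, dst io.Writer) {
	f, err := os.Open(path) // blocks until the FIFO has a writer
	if err != nil {
		log.Printf("open %s: %v", path, err)
		return
	}
	defer f.Close()
	if _, err := io.Copy(dst, f); err != nil {
		log.Printf("copy shim log: %v", err)
	}
}

func main() {
	copyFifo("/run/containerd/example-stdout", os.Stdout) // illustrative path
}
```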
Feb 9 09:59:57.881092 kubelet[2474]: I0209 09:59:57.881003 2474 scope.go:115] "RemoveContainer" containerID="becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799" Feb 9 09:59:57.882458 kubelet[2474]: I0209 09:59:57.881515 2474 scope.go:115] "RemoveContainer" containerID="becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799" Feb 9 09:59:57.887884 env[1381]: time="2024-02-09T09:59:57.887615489Z" level=info msg="RemoveContainer for \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\"" Feb 9 09:59:57.888583 env[1381]: time="2024-02-09T09:59:57.888551623Z" level=info msg="RemoveContainer for \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\"" Feb 9 09:59:57.888692 env[1381]: time="2024-02-09T09:59:57.888648920Z" level=error msg="RemoveContainer for \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\" failed" error="failed to set removing state for container \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\": container is already in removing state" Feb 9 09:59:57.888831 kubelet[2474]: E0209 09:59:57.888809 2474 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\": container is already in removing state" containerID="becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799" Feb 9 09:59:57.888889 kubelet[2474]: E0209 09:59:57.888849 2474 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799": container is already in removing state; Skipping pod "cilium-ncxkr_kube-system(1a665b0d-ca0b-40c2-b557-b00a7bd0debe)" Feb 9 09:59:57.889152 kubelet[2474]: E0209 09:59:57.889135 2474 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-ncxkr_kube-system(1a665b0d-ca0b-40c2-b557-b00a7bd0debe)\"" pod="kube-system/cilium-ncxkr" podUID=1a665b0d-ca0b-40c2-b557-b00a7bd0debe Feb 9 09:59:57.900955 env[1381]: time="2024-02-09T09:59:57.900896802Z" level=info msg="RemoveContainer for \"becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799\" returns successfully" Feb 9 09:59:58.884316 env[1381]: time="2024-02-09T09:59:58.884134806Z" level=info msg="StopPodSandbox for \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\"" Feb 9 09:59:58.884316 env[1381]: time="2024-02-09T09:59:58.884191032Z" level=info msg="Container to stop \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:58.887321 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422-shm.mount: Deactivated successfully. Feb 9 09:59:58.897729 systemd[1]: cri-containerd-0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422.scope: Deactivated successfully. Feb 9 09:59:58.918800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422-rootfs.mount: Deactivated successfully. 
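[Editor's note] The `container is already in removing state` error just above is a benign race rather than a third failure: kubelet asks for removal of `becb4a48…` twice (the two back-to-back `RemoveContainer` scope lines), the runtime flips the container into its removing state on the first call, and the second call is rejected and skipped, as the "Skipping pod" message says. A toy model of that guard, assuming a simple per-container flag:

```go
// Toy model of the removal race above, under the assumption that the
// runtime guards a per-container "removing" flag: the first caller wins,
// the second gets "already in removing state" and treats the container
// as already handled.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type containerStore struct {
	mu       sync.Mutex
	removing map[string]bool
}

func (s *containerStore) markRemoving(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.removing[id] {
		return errors.New("container is already in removing state")
	}
	s.removing[id] = true
	return nil
}

func main() {
	s := &containerStore{removing: map[string]bool{}}
	id := "becb4a484738..." // truncated for the sketch
	fmt.Println(s.markRemoving(id)) // <nil>: first RemoveContainer proceeds
	fmt.Println(s.markRemoving(id)) // second call fails, as in the log
}
```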
Feb 9 09:59:58.939295 env[1381]: time="2024-02-09T09:59:58.939210404Z" level=info msg="shim disconnected" id=0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422 Feb 9 09:59:58.939295 env[1381]: time="2024-02-09T09:59:58.939271069Z" level=warning msg="cleaning up after shim disconnected" id=0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422 namespace=k8s.io Feb 9 09:59:58.939295 env[1381]: time="2024-02-09T09:59:58.939282346Z" level=info msg="cleaning up dead shim" Feb 9 09:59:58.946517 env[1381]: time="2024-02-09T09:59:58.946469024Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4552 runtime=io.containerd.runc.v2\n" Feb 9 09:59:58.946792 env[1381]: time="2024-02-09T09:59:58.946764273Z" level=info msg="TearDown network for sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" successfully" Feb 9 09:59:58.946834 env[1381]: time="2024-02-09T09:59:58.946789467Z" level=info msg="StopPodSandbox for \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" returns successfully" Feb 9 09:59:58.965399 kubelet[2474]: I0209 09:59:58.965357 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-cgroup\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.965741 kubelet[2474]: I0209 09:59:58.965442 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.965741 kubelet[2474]: I0209 09:59:58.965502 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-net\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.965741 kubelet[2474]: I0209 09:59:58.965519 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.965741 kubelet[2474]: I0209 09:59:58.965544 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hostproc\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.965741 kubelet[2474]: I0209 09:59:58.965555 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.967261 kubelet[2474]: I0209 09:59:58.965598 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-ipsec-secrets\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967261 kubelet[2474]: I0209 09:59:58.965619 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hubble-tls\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967261 kubelet[2474]: I0209 09:59:58.965637 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-bpf-maps\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967261 kubelet[2474]: I0209 09:59:58.965658 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-clustermesh-secrets\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967261 kubelet[2474]: I0209 09:59:58.965674 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-etc-cni-netd\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967261 kubelet[2474]: I0209 09:59:58.965693 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sct9k\" (UniqueName: \"kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-kube-api-access-sct9k\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967439 kubelet[2474]: I0209 09:59:58.965709 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cni-path\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967439 kubelet[2474]: I0209 09:59:58.965726 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-lib-modules\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:58.967439 kubelet[2474]: I0209 09:59:58.965757 2474 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hostproc\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:58.967439 kubelet[2474]: I0209 09:59:58.965772 2474 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-net\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:58.967439 kubelet[2474]: I0209 09:59:58.965783 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-cgroup\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:58.967439 kubelet[2474]: I0209 09:59:58.965797 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.967668 kubelet[2474]: I0209 09:59:58.966449 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.967668 kubelet[2474]: I0209 09:59:58.966532 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.967668 kubelet[2474]: I0209 09:59:58.966564 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:58.969630 systemd[1]: var-lib-kubelet-pods-1a665b0d\x2dca0b\x2d40c2\x2db557\x2db00a7bd0debe-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:58.970763 kubelet[2474]: I0209 09:59:58.970740 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:58.974449 systemd[1]: var-lib-kubelet-pods-1a665b0d\x2dca0b\x2d40c2\x2db557\x2db00a7bd0debe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:59:58.976117 systemd[1]: var-lib-kubelet-pods-1a665b0d\x2dca0b\x2d40c2\x2db557\x2db00a7bd0debe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:58.976722 kubelet[2474]: I0209 09:59:58.976682 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:58.977078 kubelet[2474]: I0209 09:59:58.977053 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:58.978643 kubelet[2474]: I0209 09:59:58.978618 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-kube-api-access-sct9k" (OuterVolumeSpecName: "kube-api-access-sct9k") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "kube-api-access-sct9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:59.066685 kubelet[2474]: I0209 09:59:59.066646 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.066854 kubelet[2474]: I0209 09:59:59.066665 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-kernel\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:59.066972 kubelet[2474]: I0209 09:59:59.066962 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-config-path\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:59.067060 kubelet[2474]: I0209 09:59:59.067051 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-run\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:59.067140 kubelet[2474]: I0209 09:59:59.067131 2474 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-xtables-lock\") pod \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\" (UID: \"1a665b0d-ca0b-40c2-b557-b00a7bd0debe\") " Feb 9 09:59:59.067217 kubelet[2474]: W0209 09:59:59.067180 2474 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1a665b0d-ca0b-40c2-b557-b00a7bd0debe/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:59:59.067305 kubelet[2474]: I0209 09:59:59.067295 2474 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-hubble-tls\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067374 kubelet[2474]: I0209 09:59:59.067365 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067452 kubelet[2474]: I0209 09:59:59.067442 2474 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-bpf-maps\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067522 kubelet[2474]: I0209 09:59:59.067513 2474 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-clustermesh-secrets\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067592 kubelet[2474]: I0209 09:59:59.067583 2474 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-etc-cni-netd\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067657 kubelet[2474]: I0209 09:59:59.067648 2474 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-sct9k\" (UniqueName: \"kubernetes.io/projected/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-kube-api-access-sct9k\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067728 kubelet[2474]: I0209 09:59:59.067718 2474 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cni-path\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067797 kubelet[2474]: I0209 09:59:59.067789 2474 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-lib-modules\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067865 kubelet[2474]: I0209 09:59:59.067857 2474 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.067944 kubelet[2474]: I0209 09:59:59.067932 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.068031 kubelet[2474]: I0209 09:59:59.068019 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.068982 kubelet[2474]: I0209 09:59:59.068953 2474 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a665b0d-ca0b-40c2-b557-b00a7bd0debe" (UID: "1a665b0d-ca0b-40c2-b557-b00a7bd0debe"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:59.168664 kubelet[2474]: I0209 09:59:59.168554 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-run\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.168664 kubelet[2474]: I0209 09:59:59.168585 2474 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-xtables-lock\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.168664 kubelet[2474]: I0209 09:59:59.168598 2474 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a665b0d-ca0b-40c2-b557-b00a7bd0debe-cilium-config-path\") on node \"ci-3510.3.2-a-8b452ef1bd\" DevicePath \"\"" Feb 9 09:59:59.819065 kubelet[2474]: W0209 09:59:59.819011 2474 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a665b0d_ca0b_40c2_b557_b00a7bd0debe.slice/cri-containerd-becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799.scope WatchSource:0}: container "becb4a484738922bdab3afb10710224732fd24639c50dc771b811f1f78822799" in namespace "k8s.io": not found Feb 9 09:59:59.887025 kubelet[2474]: I0209 09:59:59.887001 2474 scope.go:115] "RemoveContainer" containerID="ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60" Feb 9 09:59:59.887179 systemd[1]: var-lib-kubelet-pods-1a665b0d\x2dca0b\x2d40c2\x2db557\x2db00a7bd0debe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsct9k.mount: Deactivated successfully. Feb 9 09:59:59.888615 env[1381]: time="2024-02-09T09:59:59.888571900Z" level=info msg="RemoveContainer for \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\"" Feb 9 09:59:59.891146 systemd[1]: Removed slice kubepods-burstable-pod1a665b0d_ca0b_40c2_b557_b00a7bd0debe.slice. Feb 9 09:59:59.899032 env[1381]: time="2024-02-09T09:59:59.898989620Z" level=info msg="RemoveContainer for \"ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60\" returns successfully" Feb 9 09:59:59.932370 kubelet[2474]: I0209 09:59:59.932332 2474 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:59.932606 kubelet[2474]: E0209 09:59:59.932590 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a665b0d-ca0b-40c2-b557-b00a7bd0debe" containerName="mount-cgroup" Feb 9 09:59:59.932688 kubelet[2474]: E0209 09:59:59.932679 2474 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a665b0d-ca0b-40c2-b557-b00a7bd0debe" containerName="mount-cgroup" Feb 9 09:59:59.932777 kubelet[2474]: I0209 09:59:59.932767 2474 memory_manager.go:346] "RemoveStaleState removing state" podUID="1a665b0d-ca0b-40c2-b557-b00a7bd0debe" containerName="mount-cgroup" Feb 9 09:59:59.932839 kubelet[2474]: I0209 09:59:59.932831 2474 memory_manager.go:346] "RemoveStaleState removing state" podUID="1a665b0d-ca0b-40c2-b557-b00a7bd0debe" containerName="mount-cgroup" Feb 9 09:59:59.937446 systemd[1]: Created slice kubepods-burstable-pod757dd260_d57c_47ae_a542_5e3057e936cb.slice. 
Feb 9 09:59:59.973655 kubelet[2474]: I0209 09:59:59.973085 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-lib-modules\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.973655 kubelet[2474]: I0209 09:59:59.973147 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/757dd260-d57c-47ae-a542-5e3057e936cb-hubble-tls\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.973655 kubelet[2474]: I0209 09:59:59.973168 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-hostproc\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.973655 kubelet[2474]: I0209 09:59:59.973221 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-bpf-maps\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.973655 kubelet[2474]: I0209 09:59:59.973250 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-host-proc-sys-net\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.973655 kubelet[2474]: I0209 09:59:59.973304 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-cilium-run\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974120 kubelet[2474]: I0209 09:59:59.973324 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-cilium-cgroup\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974120 kubelet[2474]: I0209 09:59:59.973343 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-etc-cni-netd\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974120 kubelet[2474]: I0209 09:59:59.973404 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjr6\" (UniqueName: \"kubernetes.io/projected/757dd260-d57c-47ae-a542-5e3057e936cb-kube-api-access-8hjr6\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974120 kubelet[2474]: I0209 09:59:59.973455 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-host-proc-sys-kernel\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974120 kubelet[2474]: I0209 09:59:59.973476 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-cni-path\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974120 kubelet[2474]: I0209 09:59:59.973536 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/757dd260-d57c-47ae-a542-5e3057e936cb-xtables-lock\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974259 kubelet[2474]: I0209 09:59:59.973560 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/757dd260-d57c-47ae-a542-5e3057e936cb-cilium-ipsec-secrets\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974259 kubelet[2474]: I0209 09:59:59.973579 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/757dd260-d57c-47ae-a542-5e3057e936cb-clustermesh-secrets\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 09:59:59.974259 kubelet[2474]: I0209 09:59:59.973640 2474 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/757dd260-d57c-47ae-a542-5e3057e936cb-cilium-config-path\") pod \"cilium-qtcp6\" (UID: \"757dd260-d57c-47ae-a542-5e3057e936cb\") " pod="kube-system/cilium-qtcp6" Feb 9 10:00:00.240958 env[1381]: time="2024-02-09T10:00:00.240908475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qtcp6,Uid:757dd260-d57c-47ae-a542-5e3057e936cb,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:00.280956 env[1381]: time="2024-02-09T10:00:00.280890425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:00.281234 env[1381]: time="2024-02-09T10:00:00.280929216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:00.281234 env[1381]: time="2024-02-09T10:00:00.280939653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:00.281234 env[1381]: time="2024-02-09T10:00:00.281162600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c pid=4578 runtime=io.containerd.runc.v2 Feb 9 10:00:00.291807 systemd[1]: Started cri-containerd-3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c.scope. 
Feb 9 10:00:00.315943 env[1381]: time="2024-02-09T10:00:00.315899830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qtcp6,Uid:757dd260-d57c-47ae-a542-5e3057e936cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\"" Feb 9 10:00:00.318804 env[1381]: time="2024-02-09T10:00:00.318770071Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:00:00.357246 env[1381]: time="2024-02-09T10:00:00.357195029Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d\"" Feb 9 10:00:00.358155 env[1381]: time="2024-02-09T10:00:00.358128609Z" level=info msg="StartContainer for \"8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d\"" Feb 9 10:00:00.372257 systemd[1]: Started cri-containerd-8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d.scope. Feb 9 10:00:00.405786 env[1381]: time="2024-02-09T10:00:00.405731437Z" level=info msg="StartContainer for \"8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d\" returns successfully" Feb 9 10:00:00.408740 systemd[1]: cri-containerd-8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d.scope: Deactivated successfully. Feb 9 10:00:00.446319 env[1381]: time="2024-02-09T10:00:00.446264856Z" level=info msg="shim disconnected" id=8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d Feb 9 10:00:00.447886 env[1381]: time="2024-02-09T10:00:00.446314165Z" level=warning msg="cleaning up after shim disconnected" id=8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d namespace=k8s.io Feb 9 10:00:00.447886 env[1381]: time="2024-02-09T10:00:00.447881994Z" level=info msg="cleaning up dead shim" Feb 9 10:00:00.450061 kubelet[2474]: I0209 10:00:00.450027 2474 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1a665b0d-ca0b-40c2-b557-b00a7bd0debe path="/var/lib/kubelet/pods/1a665b0d-ca0b-40c2-b557-b00a7bd0debe/volumes" Feb 9 10:00:00.456407 env[1381]: time="2024-02-09T10:00:00.456352272Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4662 runtime=io.containerd.runc.v2\n" Feb 9 10:00:00.702107 kubelet[2474]: I0209 10:00:00.702071 2474 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-8b452ef1bd" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:00:00.702018686 +0000 UTC m=+214.451047890 LastTransitionTime:2024-02-09 10:00:00.702018686 +0000 UTC m=+214.451047890 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 10:00:00.892713 env[1381]: time="2024-02-09T10:00:00.892670623Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:00:00.936721 env[1381]: time="2024-02-09T10:00:00.936655666Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929\"" Feb 9 10:00:00.937487 env[1381]: time="2024-02-09T10:00:00.937452158Z" level=info msg="StartContainer for \"6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929\"" Feb 9 10:00:00.956634 systemd[1]: Started cri-containerd-6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929.scope. Feb 9 10:00:00.989780 env[1381]: time="2024-02-09T10:00:00.989734800Z" level=info msg="StartContainer for \"6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929\" returns successfully" Feb 9 10:00:00.994085 systemd[1]: cri-containerd-6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929.scope: Deactivated successfully. Feb 9 10:00:01.027068 env[1381]: time="2024-02-09T10:00:01.027012712Z" level=info msg="shim disconnected" id=6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929 Feb 9 10:00:01.027068 env[1381]: time="2024-02-09T10:00:01.027064340Z" level=warning msg="cleaning up after shim disconnected" id=6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929 namespace=k8s.io Feb 9 10:00:01.027068 env[1381]: time="2024-02-09T10:00:01.027075777Z" level=info msg="cleaning up dead shim" Feb 9 10:00:01.034947 env[1381]: time="2024-02-09T10:00:01.034898900Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4727 runtime=io.containerd.runc.v2\n" Feb 9 10:00:01.516565 kubelet[2474]: E0209 10:00:01.516540 2474 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:00:01.887398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929-rootfs.mount: Deactivated successfully. Feb 9 10:00:01.898552 env[1381]: time="2024-02-09T10:00:01.898509872Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:00:01.933351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326590335.mount: Deactivated successfully. Feb 9 10:00:01.944271 env[1381]: time="2024-02-09T10:00:01.944214102Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509\"" Feb 9 10:00:01.944944 env[1381]: time="2024-02-09T10:00:01.944908859Z" level=info msg="StartContainer for \"a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509\"" Feb 9 10:00:01.960692 systemd[1]: Started cri-containerd-a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509.scope. Feb 9 10:00:01.996406 systemd[1]: cri-containerd-a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509.scope: Deactivated successfully. 
Feb 9 10:00:02.001625 env[1381]: time="2024-02-09T10:00:02.001559399Z" level=info msg="StartContainer for \"a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509\" returns successfully" Feb 9 10:00:02.043601 env[1381]: time="2024-02-09T10:00:02.043555165Z" level=info msg="shim disconnected" id=a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509 Feb 9 10:00:02.043921 env[1381]: time="2024-02-09T10:00:02.043902604Z" level=warning msg="cleaning up after shim disconnected" id=a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509 namespace=k8s.io Feb 9 10:00:02.044024 env[1381]: time="2024-02-09T10:00:02.044011099Z" level=info msg="cleaning up dead shim" Feb 9 10:00:02.051558 env[1381]: time="2024-02-09T10:00:02.051516668Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4787 runtime=io.containerd.runc.v2\n" Feb 9 10:00:02.902093 env[1381]: time="2024-02-09T10:00:02.902052103Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:00:02.931461 kubelet[2474]: W0209 10:00:02.930828 2474 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a665b0d_ca0b_40c2_b557_b00a7bd0debe.slice/cri-containerd-ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60.scope WatchSource:0}: container "ff898b5a0ff84936ac7ed23cd4d8a72bb8987ddd77b93cc0b0851360745c5c60" in namespace "k8s.io": not found Feb 9 10:00:02.947521 env[1381]: time="2024-02-09T10:00:02.947468151Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537\"" Feb 9 10:00:02.948499 env[1381]: time="2024-02-09T10:00:02.948406053Z" level=info msg="StartContainer for \"9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537\"" Feb 9 10:00:02.968775 systemd[1]: Started cri-containerd-9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537.scope. Feb 9 10:00:02.997929 systemd[1]: cri-containerd-9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537.scope: Deactivated successfully. Feb 9 10:00:03.003653 env[1381]: time="2024-02-09T10:00:03.003406148Z" level=info msg="StartContainer for \"9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537\" returns successfully" Feb 9 10:00:03.037042 env[1381]: time="2024-02-09T10:00:03.036996005Z" level=info msg="shim disconnected" id=9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537 Feb 9 10:00:03.037380 env[1381]: time="2024-02-09T10:00:03.037361160Z" level=warning msg="cleaning up after shim disconnected" id=9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537 namespace=k8s.io Feb 9 10:00:03.037477 env[1381]: time="2024-02-09T10:00:03.037462856Z" level=info msg="cleaning up dead shim" Feb 9 10:00:03.044484 env[1381]: time="2024-02-09T10:00:03.044446838Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4843 runtime=io.containerd.runc.v2\n" Feb 9 10:00:03.887491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537-rootfs.mount: Deactivated successfully. 
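[Editor's note] The `mount-bpf-fs` step that just completed exists to ensure a BPF filesystem is mounted at `/sys/fs/bpf`, so Cilium's pinned maps outlive agent restarts. A sketch of the mount call it amounts to (treating EBUSY as "already mounted" is an assumption of this sketch):

```go
// Sketch of a mount-bpf-fs style step: mount bpffs at /sys/fs/bpf.
// Treating EBUSY as "already mounted" is an assumption here.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		if err == unix.EBUSY {
			return // bpffs already mounted; nothing to do
		}
		log.Fatalf("mount bpffs: %v", err)
	}
}
```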
Feb 9 10:00:03.905153 env[1381]: time="2024-02-09T10:00:03.905102801Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:00:03.957538 env[1381]: time="2024-02-09T10:00:03.957492314Z" level=info msg="CreateContainer within sandbox \"3c8c7e642690387334c6ae993bcbde553446b634dcdbe4085aa3756c00380f7c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f\"" Feb 9 10:00:03.959764 env[1381]: time="2024-02-09T10:00:03.958170322Z" level=info msg="StartContainer for \"a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f\"" Feb 9 10:00:03.978984 systemd[1]: Started cri-containerd-a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f.scope. Feb 9 10:00:04.011277 env[1381]: time="2024-02-09T10:00:04.011222152Z" level=info msg="StartContainer for \"a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f\" returns successfully" Feb 9 10:00:04.530445 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 10:00:04.931289 kubelet[2474]: I0209 10:00:04.931245 2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qtcp6" podStartSLOduration=5.931208568 pod.CreationTimestamp="2024-02-09 09:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:04.931109758 +0000 UTC m=+218.680138962" watchObservedRunningTime="2024-02-09 10:00:04.931208568 +0000 UTC m=+218.680237772" Feb 9 10:00:06.042876 kubelet[2474]: W0209 10:00:06.042832 2474 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757dd260_d57c_47ae_a542_5e3057e936cb.slice/cri-containerd-8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d.scope WatchSource:0}: task 8036f072ee04c621fe3d0bfc55be24c02fc67be7a668f674411cde0b7e36b32d not found: not found Feb 9 10:00:06.160211 systemd[1]: run-containerd-runc-k8s.io-a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f-runc.2cDsKw.mount: Deactivated successfully. Feb 9 10:00:07.136814 systemd-networkd[1531]: lxc_health: Link UP Feb 9 10:00:07.152840 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:00:07.152573 systemd-networkd[1531]: lxc_health: Gained carrier Feb 9 10:00:08.312890 systemd[1]: run-containerd-runc-k8s.io-a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f-runc.oYUIzF.mount: Deactivated successfully. Feb 9 10:00:08.390580 systemd-networkd[1531]: lxc_health: Gained IPv6LL Feb 9 10:00:09.152711 kubelet[2474]: W0209 10:00:09.152669 2474 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757dd260_d57c_47ae_a542_5e3057e936cb.slice/cri-containerd-6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929.scope WatchSource:0}: task 6a2c213c7b6caf6edabdc1256df49e7f4b1b5d6b6235348aa3ad62980a936929 not found: not found Feb 9 10:00:10.468095 systemd[1]: run-containerd-runc-k8s.io-a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f-runc.V4XOHc.mount: Deactivated successfully. 
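[Editor's note] The kernel's `alg: No test for seqiv(rfc4106(gcm(aes)))` line at 10:00:04 is the IPsec side coming up: the pod mounts `cilium-ipsec-secrets`, and Cilium's transparent encryption programs ESP with AES-GCM, which the crypto layer instantiates as `seqiv(rfc4106(gcm(aes)))`. A sketch of producing a key line in the shape Cilium's keyfile uses (the exact line format is an assumption, not taken from this log):

```go
// Sketch generating an IPsec key line in the assumed keyfile shape
// (spi, algorithm, hex key, ICV bits). rfc4106 with a 128-bit AES key
// takes 20 bytes of key material: 16 key bytes plus a 4-byte salt.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	keyAndSalt := make([]byte, 20)
	if _, err := rand.Read(keyAndSalt); err != nil {
		panic(err)
	}
	fmt.Printf("3 rfc4106(gcm(aes)) %s 128\n", hex.EncodeToString(keyAndSalt))
}
```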
Feb 9 10:00:12.259731 kubelet[2474]: W0209 10:00:12.259696 2474 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757dd260_d57c_47ae_a542_5e3057e936cb.slice/cri-containerd-a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509.scope WatchSource:0}: task a00a8949f378e41f006b1513b37e9f69459fcf053a7ee54525345e45dae71509 not found: not found Feb 9 10:00:12.602591 systemd[1]: run-containerd-runc-k8s.io-a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f-runc.kfFHtS.mount: Deactivated successfully. Feb 9 10:00:14.733494 systemd[1]: run-containerd-runc-k8s.io-a12cab1806073fdcabd81e79efbdf523141d5383bd154ef1f6e607a8a2fa5d9f-runc.1TKF2l.mount: Deactivated successfully. Feb 9 10:00:14.853055 sshd[4525]: pam_unix(sshd:session): session closed for user core Feb 9 10:00:14.855547 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 10:00:14.856262 systemd[1]: sshd@27-10.200.20.40:22-10.200.12.6:36520.service: Deactivated successfully. Feb 9 10:00:14.856504 systemd-logind[1369]: Session 30 logged out. Waiting for processes to exit. Feb 9 10:00:14.857701 systemd-logind[1369]: Removed session 30. Feb 9 10:00:15.366894 kubelet[2474]: W0209 10:00:15.366853 2474 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757dd260_d57c_47ae_a542_5e3057e936cb.slice/cri-containerd-9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537.scope WatchSource:0}: task 9909741a560ecca30e6d915a7919753da07896870ef94bb47c81c737d792c537 not found: not found Feb 9 10:00:26.392475 env[1381]: time="2024-02-09T10:00:26.392269082Z" level=info msg="StopPodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\"" Feb 9 10:00:26.392475 env[1381]: time="2024-02-09T10:00:26.392366009Z" level=info msg="TearDown network for sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" successfully" Feb 9 10:00:26.392475 env[1381]: time="2024-02-09T10:00:26.392398412Z" level=info msg="StopPodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" returns successfully" Feb 9 10:00:26.393277 env[1381]: time="2024-02-09T10:00:26.393247772Z" level=info msg="RemovePodSandbox for \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\"" Feb 9 10:00:26.393333 env[1381]: time="2024-02-09T10:00:26.393292511Z" level=info msg="Forcibly stopping sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\"" Feb 9 10:00:26.393395 env[1381]: time="2024-02-09T10:00:26.393373658Z" level=info msg="TearDown network for sandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" successfully" Feb 9 10:00:26.405212 env[1381]: time="2024-02-09T10:00:26.405169933Z" level=info msg="RemovePodSandbox \"771fd8e05298dc45657870e9f862c22280db476d89af3ccddc98dded4dbe28ff\" returns successfully" Feb 9 10:00:26.405742 env[1381]: time="2024-02-09T10:00:26.405713530Z" level=info msg="StopPodSandbox for \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\"" Feb 9 10:00:26.405857 env[1381]: time="2024-02-09T10:00:26.405803929Z" level=info msg="TearDown network for sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" successfully" Feb 9 10:00:26.405857 env[1381]: time="2024-02-09T10:00:26.405839496Z" level=info msg="StopPodSandbox for \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" returns successfully" Feb 9 
10:00:26.406141 env[1381]: time="2024-02-09T10:00:26.406113978Z" level=info msg="RemovePodSandbox for \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\"" Feb 9 10:00:26.406212 env[1381]: time="2024-02-09T10:00:26.406143016Z" level=info msg="Forcibly stopping sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\"" Feb 9 10:00:26.406243 env[1381]: time="2024-02-09T10:00:26.406219678Z" level=info msg="TearDown network for sandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" successfully" Feb 9 10:00:26.416519 env[1381]: time="2024-02-09T10:00:26.416461583Z" level=info msg="RemovePodSandbox \"0367eb93193bf559e8d0b234d914337e004efac222a1afe4b232a8e54c58e422\" returns successfully" Feb 9 10:00:26.417049 env[1381]: time="2024-02-09T10:00:26.416898479Z" level=info msg="StopPodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\"" Feb 9 10:00:26.417049 env[1381]: time="2024-02-09T10:00:26.416967851Z" level=info msg="TearDown network for sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" successfully" Feb 9 10:00:26.417049 env[1381]: time="2024-02-09T10:00:26.416995487Z" level=info msg="StopPodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" returns successfully" Feb 9 10:00:26.417241 env[1381]: time="2024-02-09T10:00:26.417209810Z" level=info msg="RemovePodSandbox for \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\"" Feb 9 10:00:26.417282 env[1381]: time="2024-02-09T10:00:26.417239168Z" level=info msg="Forcibly stopping sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\"" Feb 9 10:00:26.417324 env[1381]: time="2024-02-09T10:00:26.417298126Z" level=info msg="TearDown network for sandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" successfully" Feb 9 10:00:26.426108 env[1381]: time="2024-02-09T10:00:26.426063565Z" level=info msg="RemovePodSandbox \"814107a601155bc386a48371bb24dec0f47e939a915a6a666600de556d03302b\" returns successfully" Feb 9 10:00:50.105084 systemd[1]: cri-containerd-a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd.scope: Deactivated successfully. Feb 9 10:00:50.105384 systemd[1]: cri-containerd-a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd.scope: Consumed 3.119s CPU time. Feb 9 10:00:50.123411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd-rootfs.mount: Deactivated successfully. 
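[Editor's note] The burst of `StopPodSandbox`/`RemovePodSandbox` pairs at 10:00:26 is kubelet's periodic garbage collection sweeping out dead sandboxes, including `0367eb93…` left by the failed pod earlier. Sketched against the CRI, under the same assumed default socket as before:

```go
// Sketch of a sandbox GC pass over the CRI: stop and force-remove every
// sandbox that is no longer ready, yielding StopPodSandbox/RemovePodSandbox
// pairs like those above. Errors from individual removals are ignored here.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx := context.Background()
	rt := pb.NewRuntimeServiceClient(conn)
	list, err := rt.ListPodSandbox(ctx, &pb.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range list.Items {
		if s.State != pb.PodSandboxState_SANDBOX_NOTREADY {
			continue
		}
		rt.StopPodSandbox(ctx, &pb.StopPodSandboxRequest{PodSandboxId: s.Id})
		rt.RemovePodSandbox(ctx, &pb.RemovePodSandboxRequest{PodSandboxId: s.Id})
	}
}
```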
Feb 9 10:00:50.146182 env[1381]: time="2024-02-09T10:00:50.146133702Z" level=info msg="shim disconnected" id=a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd Feb 9 10:00:50.146730 env[1381]: time="2024-02-09T10:00:50.146708331Z" level=warning msg="cleaning up after shim disconnected" id=a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd namespace=k8s.io Feb 9 10:00:50.146826 env[1381]: time="2024-02-09T10:00:50.146812282Z" level=info msg="cleaning up dead shim" Feb 9 10:00:50.154519 env[1381]: time="2024-02-09T10:00:50.154475680Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5546 runtime=io.containerd.runc.v2\n" Feb 9 10:00:50.985090 kubelet[2474]: I0209 10:00:50.985064 2474 scope.go:115] "RemoveContainer" containerID="a3154af0fde723640117a83e638f762f4b4d2dc3e37cdb9887e563ea231b5fdd" Feb 9 10:00:50.987444 env[1381]: time="2024-02-09T10:00:50.987386149Z" level=info msg="CreateContainer within sandbox \"7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 10:00:51.016092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961894544.mount: Deactivated successfully. Feb 9 10:00:51.027150 env[1381]: time="2024-02-09T10:00:51.027100636Z" level=info msg="CreateContainer within sandbox \"7159e517c0d549a07dd4d3357dad723a60ef9bf696c2f08191d1b9bc57d0caa5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ed7173b4c64bc17bd0bc1973a8993b00fb26996ac58bbbd44d6bad50a15315b0\"" Feb 9 10:00:51.027802 env[1381]: time="2024-02-09T10:00:51.027781366Z" level=info msg="StartContainer for \"ed7173b4c64bc17bd0bc1973a8993b00fb26996ac58bbbd44d6bad50a15315b0\"" Feb 9 10:00:51.046235 systemd[1]: Started cri-containerd-ed7173b4c64bc17bd0bc1973a8993b00fb26996ac58bbbd44d6bad50a15315b0.scope. Feb 9 10:00:51.049855 kubelet[2474]: E0209 10:00:51.048678 2474 controller.go:189] failed to update lease, error: Put "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 10:00:51.089023 env[1381]: time="2024-02-09T10:00:51.088954066Z" level=info msg="StartContainer for \"ed7173b4c64bc17bd0bc1973a8993b00fb26996ac58bbbd44d6bad50a15315b0\" returns successfully" Feb 9 10:00:51.529388 kubelet[2474]: E0209 10:00:51.529363 2474 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.40:41604->10.200.20.20:2379: read: connection timed out Feb 9 10:00:51.534628 systemd[1]: cri-containerd-8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332.scope: Deactivated successfully. Feb 9 10:00:51.534913 systemd[1]: cri-containerd-8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332.scope: Consumed 1.652s CPU time. Feb 9 10:00:51.556133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332-rootfs.mount: Deactivated successfully. 
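[Editor's note] The `failed to update lease` errors bracketing the kube-controller-manager restart are the node's heartbeat stalling: kubelet renews its Lease in `kube-node-lease` roughly every ten seconds, and both the client-side timeout here and the etcd read timeout that follows point at the API/storage path rather than the kubelet itself. A client-go sketch of that renewal (node name taken from the log; the 10s timeout mirrors the `?timeout=10s` in the failing PUTs):

```go
// Sketch of the lease renewal that is timing out above, using client-go
// in-cluster config; a real kubelet also handles conflicts and retries.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func renewLease(cs kubernetes.Interface, node string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	leases := cs.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(ctx, node, metav1.GetOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	_, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	if err := renewLease(kubernetes.NewForConfigOrDie(cfg), "ci-3510.3.2-a-8b452ef1bd"); err != nil {
		log.Printf("failed to update lease: %v", err)
	}
}
```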
Feb 9 10:00:51.578996 env[1381]: time="2024-02-09T10:00:51.578950954Z" level=info msg="shim disconnected" id=8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332 Feb 9 10:00:51.579546 env[1381]: time="2024-02-09T10:00:51.579523172Z" level=warning msg="cleaning up after shim disconnected" id=8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332 namespace=k8s.io Feb 9 10:00:51.579632 env[1381]: time="2024-02-09T10:00:51.579619236Z" level=info msg="cleaning up dead shim" Feb 9 10:00:51.590502 env[1381]: time="2024-02-09T10:00:51.590380987Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5605 runtime=io.containerd.runc.v2\n" Feb 9 10:00:51.989652 kubelet[2474]: I0209 10:00:51.989628 2474 scope.go:115] "RemoveContainer" containerID="8a374a194c5f66b9e47014f930a9e6d1569999b7b9201e5c90c4466aed6a8332" Feb 9 10:00:51.991629 env[1381]: time="2024-02-09T10:00:51.991594291Z" level=info msg="CreateContainer within sandbox \"93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 10:00:52.021359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367669183.mount: Deactivated successfully. Feb 9 10:00:52.043451 env[1381]: time="2024-02-09T10:00:52.043357061Z" level=info msg="CreateContainer within sandbox \"93b73f258319d1ad597285a0d6cbb697d851f4de9eb98ac9ca5b47735e0f3914\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d87158152b69426c614297a2078413fe2094b665fc50f808fb63003573d6d270\"" Feb 9 10:00:52.044053 env[1381]: time="2024-02-09T10:00:52.044029454Z" level=info msg="StartContainer for \"d87158152b69426c614297a2078413fe2094b665fc50f808fb63003573d6d270\"" Feb 9 10:00:52.059141 systemd[1]: Started cri-containerd-d87158152b69426c614297a2078413fe2094b665fc50f808fb63003573d6d270.scope. Feb 9 10:00:52.112868 env[1381]: time="2024-02-09T10:00:52.112818258Z" level=info msg="StartContainer for \"d87158152b69426c614297a2078413fe2094b665fc50f808fb63003573d6d270\" returns successfully" Feb 9 10:00:52.124231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811594901.mount: Deactivated successfully. 
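The lease errors on either side of this block mean the kubelet could not renew its node Lease: most failures are client-side timeouts against the apiserver at 10.200.20.40:6443, and the 10:00:51 error shows the apiserver's own read from etcd (10.200.20.20:2379) timing out. A sketch for checking how stale the node's lease has become, assuming the official `kubernetes` Python client and a working kubeconfig; the node name is taken from the lease URL in the errors.

```python
#!/usr/bin/env python3
"""Report how long ago a node last renewed its coordination lease.
Sketch: assumes the official `kubernetes` client package and a valid
kubeconfig."""
from datetime import datetime, timezone

from kubernetes import client, config

NODE = "ci-3510.3.2-a-8b452ef1bd"  # from the lease URL in the errors above

config.load_kube_config()
lease = client.CoordinationV1Api().read_namespaced_lease(NODE, "kube-node-lease")

age = datetime.now(timezone.utc) - lease.spec.renew_time
print(f"holder={lease.spec.holder_identity} "
      f"renewed {age.total_seconds():.0f}s ago "
      f"(lease duration {lease.spec.lease_duration_seconds}s)")
```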
Feb 9 10:01:01.530129 kubelet[2474]: E0209 10:01:01.530096 2474 controller.go:189] failed to update lease, error: Put "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 10:01:11.531208 kubelet[2474]: E0209 10:01:11.531172 2474 controller.go:189] failed to update lease, error: Put "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-8b452ef1bd?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 10:01:13.310433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#248 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[last kernel message repeated verbatim, with only the timestamps differing, roughly 150 more times through Feb 9 10:01:14.776348]
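The burst condensed above reports failing SCSI WRITE(10) commands (cmd 0x2a) against the node's Hyper-V virtual disk: status byte scsi 0x2 is CHECK CONDITION, srb 0x4 corresponds to the Windows SRB_STATUS_ERROR code, and hv 0xc0000001 matches the NTSTATUS value STATUS_UNSUCCESSFUL (that last reading is an assumption). A sketch that tallies such lines from a journal dump and decodes the fields:

```python
#!/usr/bin/env python3
"""Tally hv_storvsc error lines like those above and decode their status
fields. Reading `hv 0xc0000001` as NTSTATUS STATUS_UNSUCCESSFUL is an
assumption; `node.journal.txt` is an illustrative file name."""
import collections
import re

PAT = re.compile(
    r"hv_storvsc [0-9a-f-]{36}: tag#\d+ cmd 0x([0-9a-f]+) "
    r"status: scsi 0x([0-9a-f]+) srb 0x([0-9a-f]+) hv 0x([0-9a-f]+)"
)

CMD  = {0x2A: "WRITE(10)"}         # SCSI opcode
SCSI = {0x02: "CHECK CONDITION"}   # SCSI status byte
SRB  = {0x04: "SRB_STATUS_ERROR"}  # Windows SRB status (assumed mapping)

counts = collections.Counter()
with open("node.journal.txt") as fh:
    for line in fh:  # findall copes with many entries per physical line
        for groups in PAT.findall(line):
            counts[tuple(int(g, 16) for g in groups)] += 1

for (cmd, scsi, srb, hv), n in counts.items():
    print(f"{n:5d}x cmd={CMD.get(cmd, hex(cmd))} "
          f"scsi={SCSI.get(scsi, hex(scsi))} "
          f"srb={SRB.get(srb, hex(srb))} hv={hex(hv)}")
```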