Jun 25 14:50:14.242275 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 14:50:14.242294 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:50:14.242302 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 14:50:14.242309 kernel: printk: bootconsole [pl11] enabled Jun 25 14:50:14.242314 kernel: efi: EFI v2.70 by EDK II Jun 25 14:50:14.242319 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e94ae18 Jun 25 14:50:14.242326 kernel: random: crng init done Jun 25 14:50:14.242331 kernel: ACPI: Early table checksum verification disabled Jun 25 14:50:14.242336 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 14:50:14.242342 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242347 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242353 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 14:50:14.242358 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242364 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242370 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242376 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242382 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242389 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242394 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 14:50:14.242400 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:50:14.242406 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 14:50:14.242411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 14:50:14.242417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 14:50:14.242422 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 14:50:14.242428 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 14:50:14.242434 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 14:50:14.242439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 14:50:14.242446 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 14:50:14.242452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 14:50:14.242457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 14:50:14.242463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 14:50:14.242469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 14:50:14.242474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 14:50:14.242480 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 25 14:50:14.242485 kernel: Zone ranges: Jun 25 14:50:14.242491 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 25 14:50:14.242496 kernel: DMA32 
empty Jun 25 14:50:14.242502 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 14:50:14.242507 kernel: Movable zone start for each node Jun 25 14:50:14.242515 kernel: Early memory node ranges Jun 25 14:50:14.242523 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 14:50:14.242529 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 14:50:14.242535 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 14:50:14.242541 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 14:50:14.242548 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 14:50:14.242554 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 14:50:14.242560 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 14:50:14.242566 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 14:50:14.242571 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 14:50:14.242577 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 14:50:14.242583 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 14:50:14.242589 kernel: psci: probing for conduit method from ACPI. Jun 25 14:50:14.242595 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 14:50:14.242601 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:50:14.242607 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 14:50:14.242613 kernel: psci: SMC Calling Convention v1.4 Jun 25 14:50:14.242621 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 14:50:14.242627 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 14:50:14.242633 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:50:14.242639 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:50:14.242645 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 14:50:14.242651 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:50:14.242657 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:50:14.242663 kernel: CPU features: detected: Hardware dirty bit management Jun 25 14:50:14.242669 kernel: CPU features: detected: Spectre-BHB Jun 25 14:50:14.242675 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:50:14.242681 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:50:14.242688 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 14:50:14.242694 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 14:50:14.242700 kernel: alternatives: applying boot alternatives Jun 25 14:50:14.242706 kernel: Fallback order for Node 0: 0 Jun 25 14:50:14.242712 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 14:50:14.242717 kernel: Policy zone: Normal Jun 25 14:50:14.242725 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:50:14.242731 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 14:50:14.242737 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:50:14.242743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:50:14.242749 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:50:14.242756 kernel: software IO TLB: area num 2. Jun 25 14:50:14.242763 kernel: software IO TLB: mapped [mem 0x000000003a94a000-0x000000003e94a000] (64MB) Jun 25 14:50:14.242769 kernel: Memory: 3991396K/4194160K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 202764K reserved, 0K cma-reserved) Jun 25 14:50:14.242775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 14:50:14.242781 kernel: trace event string verifier disabled Jun 25 14:50:14.242787 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:50:14.242793 kernel: rcu: RCU event tracing is enabled. Jun 25 14:50:14.242800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 14:50:14.242806 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:50:14.242812 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:50:14.242817 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 14:50:14.242825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 14:50:14.242831 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:50:14.242837 kernel: GICv3: 960 SPIs implemented Jun 25 14:50:14.242843 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:50:14.242849 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:50:14.242854 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 14:50:14.242860 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 14:50:14.242866 kernel: ITS: No ITS available, not enabling LPIs Jun 25 14:50:14.242873 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:50:14.242879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:50:14.242885 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 14:50:14.242892 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 14:50:14.242899 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 14:50:14.242905 kernel: Console: colour dummy device 80x25 Jun 25 14:50:14.242912 kernel: printk: console [tty1] enabled Jun 25 14:50:14.242918 kernel: ACPI: Core revision 20220331 Jun 25 14:50:14.242935 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 14:50:14.242942 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:50:14.242949 kernel: LSM: Security Framework initializing Jun 25 14:50:14.242955 kernel: SELinux: Initializing. Jun 25 14:50:14.242962 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:50:14.242969 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:50:14.242975 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:50:14.242981 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:50:14.242988 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:50:14.242994 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. 
Jun 25 14:50:14.243000 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 14:50:14.243006 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 14:50:14.243013 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 14:50:14.243025 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:50:14.243032 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:50:14.243038 kernel: Remapping and enabling EFI services. Jun 25 14:50:14.243045 kernel: smp: Bringing up secondary CPUs ... Jun 25 14:50:14.243052 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:50:14.243059 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 14:50:14.243066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:50:14.243072 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 14:50:14.243078 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 14:50:14.243086 kernel: SMP: Total of 2 processors activated. Jun 25 14:50:14.243092 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:50:14.243099 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 14:50:14.243106 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 14:50:14.243112 kernel: CPU features: detected: CRC32 instructions Jun 25 14:50:14.243119 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 14:50:14.243125 kernel: CPU features: detected: LSE atomic instructions Jun 25 14:50:14.243132 kernel: CPU features: detected: Privileged Access Never Jun 25 14:50:14.243138 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:50:14.243146 kernel: alternatives: applying system-wide alternatives Jun 25 14:50:14.243152 kernel: devtmpfs: initialized Jun 25 14:50:14.243159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:50:14.243165 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 14:50:14.243172 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:50:14.243178 kernel: SMBIOS 3.1.0 present. Jun 25 14:50:14.243185 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 14:50:14.243191 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:50:14.243198 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:50:14.243206 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:50:14.243212 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:50:14.243219 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:50:14.243226 kernel: audit: type=2000 audit(0.048:1): state=initialized audit_enabled=0 res=1 Jun 25 14:50:14.243232 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:50:14.243238 kernel: cpuidle: using governor menu Jun 25 14:50:14.243245 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 14:50:14.243251 kernel: ASID allocator initialised with 32768 entries Jun 25 14:50:14.243258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:50:14.243265 kernel: Serial: AMBA PL011 UART driver Jun 25 14:50:14.243272 kernel: KASLR enabled Jun 25 14:50:14.243278 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:50:14.243285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:50:14.243291 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:50:14.243297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:50:14.243304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:50:14.243310 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:50:14.243317 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:50:14.243324 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:50:14.243331 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:50:14.243337 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:50:14.243344 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:50:14.243350 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:50:14.243357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:50:14.243363 kernel: ACPI: Interpreter enabled Jun 25 14:50:14.243370 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:50:14.243376 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 14:50:14.243384 kernel: printk: console [ttyAMA0] enabled Jun 25 14:50:14.243391 kernel: printk: bootconsole [pl11] disabled Jun 25 14:50:14.243397 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 14:50:14.243404 kernel: iommu: Default domain type: Translated Jun 25 14:50:14.243410 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:50:14.243417 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:50:14.243424 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:50:14.243430 kernel: PTP clock support registered Jun 25 14:50:14.243436 kernel: Registered efivars operations Jun 25 14:50:14.243444 kernel: No ACPI PMU IRQ for CPU0 Jun 25 14:50:14.243451 kernel: No ACPI PMU IRQ for CPU1 Jun 25 14:50:14.243457 kernel: vgaarb: loaded Jun 25 14:50:14.243463 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:50:14.243470 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:50:14.243476 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:50:14.243483 kernel: pnp: PnP ACPI init Jun 25 14:50:14.243489 kernel: pnp: PnP ACPI: found 0 devices Jun 25 14:50:14.243496 kernel: NET: Registered PF_INET protocol family Jun 25 14:50:14.243504 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:50:14.243510 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:50:14.243517 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:50:14.243523 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:50:14.243530 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:50:14.243536 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:50:14.243543 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:50:14.243550 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:50:14.243556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:50:14.243564 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:50:14.243570 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 14:50:14.243577 kernel: kvm [1]: HYP mode not available Jun 25 14:50:14.243583 kernel: Initialise system trusted keyrings Jun 25 14:50:14.243590 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:50:14.243596 kernel: Key type asymmetric registered Jun 25 14:50:14.243603 kernel: Asymmetric key parser 'x509' registered Jun 25 14:50:14.243609 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:50:14.243616 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:50:14.243624 kernel: io scheduler mq-deadline registered Jun 25 14:50:14.243630 kernel: io scheduler kyber registered Jun 25 14:50:14.243636 kernel: io scheduler bfq registered Jun 25 14:50:14.243649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:50:14.243655 kernel: thunder_xcv, ver 1.0 Jun 25 14:50:14.243662 kernel: thunder_bgx, ver 1.0 Jun 25 14:50:14.243668 kernel: nicpf, ver 1.0 Jun 25 14:50:14.243674 kernel: nicvf, ver 1.0 Jun 25 14:50:14.243801 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:50:14.243866 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:50:13 UTC (1719327013) Jun 25 14:50:14.243875 kernel: efifb: probing for efifb Jun 25 14:50:14.243882 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 14:50:14.243889 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 14:50:14.243895 kernel: efifb: scrolling: redraw Jun 25 14:50:14.243902 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 14:50:14.243908 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 14:50:14.243915 kernel: fb0: EFI VGA frame buffer device Jun 25 14:50:14.243923 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not 
implemented, skipping .... Jun 25 14:50:14.243941 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:50:14.243947 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:50:14.243954 kernel: Segment Routing with IPv6 Jun 25 14:50:14.243960 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 14:50:14.243967 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:50:14.243973 kernel: Key type dns_resolver registered Jun 25 14:50:14.243980 kernel: registered taskstats version 1 Jun 25 14:50:14.243987 kernel: Loading compiled-in X.509 certificates Jun 25 14:50:14.243994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:50:14.244001 kernel: Key type .fscrypt registered Jun 25 14:50:14.244007 kernel: Key type fscrypt-provisioning registered Jun 25 14:50:14.244014 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 14:50:14.244020 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:50:14.244026 kernel: ima: No architecture policies found Jun 25 14:50:14.244033 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:50:14.244039 kernel: clk: Disabling unused clocks Jun 25 14:50:14.244045 kernel: Freeing unused kernel memory: 34688K Jun 25 14:50:14.244053 kernel: Run /init as init process Jun 25 14:50:14.244060 kernel: with arguments: Jun 25 14:50:14.244066 kernel: /init Jun 25 14:50:14.244072 kernel: with environment: Jun 25 14:50:14.244078 kernel: HOME=/ Jun 25 14:50:14.244084 kernel: TERM=linux Jun 25 14:50:14.244091 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:50:14.244099 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:50:14.244109 systemd[1]: Detected virtualization microsoft. Jun 25 14:50:14.244116 systemd[1]: Detected architecture arm64. Jun 25 14:50:14.244123 systemd[1]: Running in initrd. Jun 25 14:50:14.244130 systemd[1]: No hostname configured, using default hostname. Jun 25 14:50:14.244137 systemd[1]: Hostname set to . Jun 25 14:50:14.244144 systemd[1]: Initializing machine ID from random generator. Jun 25 14:50:14.244151 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:50:14.244158 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:50:14.244167 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:50:14.244174 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:50:14.244180 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:50:14.244187 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:50:14.244194 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:50:14.244202 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:50:14.244209 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:50:14.244217 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:50:14.244224 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:50:14.244231 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 14:50:14.244239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:50:14.244246 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:50:14.244253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:50:14.244260 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:50:14.244267 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:50:14.244274 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:50:14.244283 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:50:14.244290 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:50:14.244297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:50:14.244304 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 14:50:14.244311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:50:14.244322 systemd-journald[208]: Journal started Jun 25 14:50:14.244360 systemd-journald[208]: Runtime Journal (/run/log/journal/15d4696da9684e65a3aac0f38097f1cf) is 8.0M, max 78.6M, 70.6M free. Jun 25 14:50:14.233414 systemd-modules-load[209]: Inserted module 'overlay' Jun 25 14:50:14.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.290944 kernel: audit: type=1130 audit(1719327014.260:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.290994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:50:14.291004 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:50:14.300235 kernel: Bridge firewalling registered Jun 25 14:50:14.300390 systemd-modules-load[209]: Inserted module 'br_netfilter' Jun 25 14:50:14.343961 kernel: audit: type=1130 audit(1719327014.306:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.343986 kernel: SCSI subsystem initialized Jun 25 14:50:14.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.323439 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:50:14.384402 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 14:50:14.384424 kernel: audit: type=1130 audit(1719327014.358:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.384445 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:50:14.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:14.378921 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:50:14.412011 kernel: audit: type=1130 audit(1719327014.390:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.413343 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:50:14.422715 systemd-modules-load[209]: Inserted module 'dm_multipath' Jun 25 14:50:14.428415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:50:14.446981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:50:14.454688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:50:14.509066 kernel: audit: type=1130 audit(1719327014.482:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.463693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:50:14.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.502158 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:50:14.538358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:50:14.576078 kernel: audit: type=1130 audit(1719327014.515:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.576104 kernel: audit: type=1130 audit(1719327014.551:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.552353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:50:14.605428 kernel: audit: type=1130 audit(1719327014.581:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:14.609127 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 14:50:14.619000 audit: BPF prog-id=6 op=LOAD Jun 25 14:50:14.627465 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:50:14.667045 kernel: audit: type=1334 audit(1719327014.619:10): prog-id=6 op=LOAD Jun 25 14:50:14.639395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:50:14.675052 dracut-cmdline[229]: dracut-dracut-053 Jun 25 14:50:14.675052 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:50:14.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.667778 systemd-resolved[234]: Positive Trust Anchors: Jun 25 14:50:14.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.667786 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:50:14.667813 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:50:14.670267 systemd-resolved[234]: Defaulting to hostname 'linux'. Jun 25 14:50:14.671058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:50:14.688279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:50:14.731840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:50:14.857955 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:50:14.865952 kernel: iscsi: registered transport (tcp) Jun 25 14:50:14.884932 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:50:14.884949 kernel: QLogic iSCSI HBA Driver Jun 25 14:50:14.919594 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:50:14.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:14.933357 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 25 14:50:14.994955 kernel: raid6: neonx8 gen() 15763 MB/s Jun 25 14:50:15.013937 kernel: raid6: neonx4 gen() 15663 MB/s Jun 25 14:50:15.033935 kernel: raid6: neonx2 gen() 13223 MB/s Jun 25 14:50:15.055936 kernel: raid6: neonx1 gen() 10494 MB/s Jun 25 14:50:15.076936 kernel: raid6: int64x8 gen() 6979 MB/s Jun 25 14:50:15.097940 kernel: raid6: int64x4 gen() 7327 MB/s Jun 25 14:50:15.119936 kernel: raid6: int64x2 gen() 6127 MB/s Jun 25 14:50:15.144872 kernel: raid6: int64x1 gen() 5053 MB/s Jun 25 14:50:15.144882 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s Jun 25 14:50:15.170117 kernel: raid6: .... xor() 11871 MB/s, rmw enabled Jun 25 14:50:15.170128 kernel: raid6: using neon recovery algorithm Jun 25 14:50:15.179939 kernel: xor: measuring software checksum speed Jun 25 14:50:15.188552 kernel: 8regs : 19859 MB/sec Jun 25 14:50:15.188564 kernel: 32regs : 19678 MB/sec Jun 25 14:50:15.205023 kernel: arm64_neon : 27098 MB/sec Jun 25 14:50:15.205033 kernel: xor: using function: arm64_neon (27098 MB/sec) Jun 25 14:50:15.261945 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:50:15.272542 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:50:15.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:15.284000 audit: BPF prog-id=7 op=LOAD Jun 25 14:50:15.284000 audit: BPF prog-id=8 op=LOAD Jun 25 14:50:15.288230 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:50:15.318353 systemd-udevd[408]: Using default interface naming scheme 'v252'. Jun 25 14:50:15.326290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:50:15.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:15.348085 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:50:15.369164 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Jun 25 14:50:15.400023 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:50:15.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:15.416372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:50:15.449672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:50:15.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:15.522990 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 14:50:15.525956 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 14:50:15.529045 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 14:50:15.529091 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 14:50:15.529101 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 14:50:15.529118 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 14:50:15.534958 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 14:50:15.541216 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 14:50:15.541265 kernel: scsi host1: storvsc_host_t Jun 25 14:50:15.560947 kernel: scsi host0: storvsc_host_t Jun 25 14:50:15.599965 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 14:50:15.607942 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 14:50:15.626855 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 14:50:15.628512 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 14:50:15.628526 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 14:50:15.646212 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 14:50:15.668855 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 14:50:15.669017 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 14:50:15.669136 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 14:50:15.669252 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 14:50:15.669339 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:50:15.669349 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 14:50:15.719199 kernel: hv_netvsc 0022487d-d18e-0022-487d-d18e0022487d eth0: VF slot 1 added Jun 25 14:50:15.727956 kernel: hv_vmbus: registering driver hv_pci Jun 25 14:50:15.737571 kernel: hv_pci 3635f06c-f5ee-4b13-abb5-b9a0bb422bc8: PCI VMBus probing: Using version 0x10004 Jun 25 14:50:15.811407 kernel: hv_pci 3635f06c-f5ee-4b13-abb5-b9a0bb422bc8: PCI host bridge to bus f5ee:00 Jun 25 14:50:15.811522 kernel: pci_bus f5ee:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 14:50:15.811622 kernel: pci_bus f5ee:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 14:50:15.811695 kernel: pci f5ee:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 14:50:15.811788 kernel: pci f5ee:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 14:50:15.811865 kernel: pci f5ee:00:02.0: enabling Extended Tags Jun 25 14:50:15.811976 kernel: pci f5ee:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f5ee:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 14:50:15.812061 kernel: pci_bus f5ee:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 14:50:15.812135 kernel: pci f5ee:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 14:50:15.848651 kernel: mlx5_core f5ee:00:02.0: enabling device (0000 -> 0002) Jun 25 14:50:16.112167 kernel: mlx5_core f5ee:00:02.0: firmware version: 16.30.1284 Jun 25 14:50:16.112285 kernel: mlx5_core f5ee:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jun 25 14:50:16.112375 kernel: hv_netvsc 0022487d-d18e-0022-487d-d18e0022487d eth0: VF registering: eth1 Jun 25 14:50:16.112462 kernel: mlx5_core f5ee:00:02.0 eth1: 
joined to eth0 Jun 25 14:50:16.112318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 14:50:16.135959 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (460) Jun 25 14:50:16.145957 kernel: mlx5_core f5ee:00:02.0 enP62958s1: renamed from eth1 Jun 25 14:50:16.156479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 14:50:16.351979 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (477) Jun 25 14:50:16.363689 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 14:50:16.381880 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 14:50:16.389351 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 14:50:16.421372 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:50:16.440952 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:50:16.449954 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:50:17.458258 disk-uuid[546]: The operation has completed successfully. Jun 25 14:50:17.464212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:50:17.522109 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:50:17.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:17.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:17.522205 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:50:17.535700 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:50:17.547838 sh[658]: Success Jun 25 14:50:17.587945 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:50:17.791363 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:50:17.797649 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:50:17.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:17.810273 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:50:17.846020 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:50:17.846071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:50:17.853292 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:50:17.858696 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:50:17.863337 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:50:18.161570 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:50:18.167119 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:50:18.184362 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 25 14:50:18.192787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 14:50:18.228630 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:50:18.228688 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:50:18.233409 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:50:18.275132 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:50:18.287983 kernel: BTRFS info (device sda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:50:18.294531 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:50:18.331028 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 14:50:18.331061 kernel: audit: type=1130 audit(1719327018.300:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.311424 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:50:18.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.357948 kernel: audit: type=1130 audit(1719327018.332:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.359158 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:50:18.364000 audit: BPF prog-id=9 op=LOAD Jun 25 14:50:18.377920 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:50:18.390059 kernel: audit: type=1334 audit(1719327018.364:25): prog-id=9 op=LOAD Jun 25 14:50:18.409286 systemd-networkd[844]: lo: Link UP Jun 25 14:50:18.409297 systemd-networkd[844]: lo: Gained carrier Jun 25 14:50:18.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.410128 systemd-networkd[844]: Enumeration completed Jun 25 14:50:18.449784 kernel: audit: type=1130 audit(1719327018.422:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.412935 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:50:18.413522 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:50:18.413526 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:50:18.423160 systemd[1]: Reached target network.target - Network. Jun 25 14:50:18.473952 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:50:18.484549 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. 
Jun 25 14:50:18.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.516958 kernel: audit: type=1130 audit(1719327018.495:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.522142 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 14:50:18.531324 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:50:18.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.555821 iscsid[849]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:50:18.555821 iscsid[849]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jun 25 14:50:18.555821 iscsid[849]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 14:50:18.555821 iscsid[849]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:50:18.555821 iscsid[849]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:50:18.555821 iscsid[849]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:50:18.555821 iscsid[849]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:50:18.694687 kernel: audit: type=1130 audit(1719327018.536:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.694719 kernel: audit: type=1130 audit(1719327018.575:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.694736 kernel: mlx5_core f5ee:00:02.0 enP62958s1: Link up Jun 25 14:50:18.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.537715 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:50:18.562226 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:50:18.730614 kernel: hv_netvsc 0022487d-d18e-0022-487d-d18e0022487d eth0: Data path switched to VF: enP62958s1 Jun 25 14:50:18.730780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:50:18.576474 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:50:18.763628 kernel: audit: type=1130 audit(1719327018.737:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:18.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:18.622288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:50:18.630226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:50:18.685132 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:50:18.717641 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:50:18.737423 systemd-networkd[844]: enP62958s1: Link UP Jun 25 14:50:18.737528 systemd-networkd[844]: eth0: Link UP Jun 25 14:50:18.737673 systemd-networkd[844]: eth0: Gained carrier Jun 25 14:50:18.737682 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:50:18.763203 systemd-networkd[844]: enP62958s1: Gained carrier Jun 25 14:50:18.786010 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:50:19.414976 ignition[843]: Ignition 2.15.0 Jun 25 14:50:19.415967 ignition[843]: Stage: fetch-offline Jun 25 14:50:19.417523 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:50:19.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:19.416033 ignition[843]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:19.466975 kernel: audit: type=1130 audit(1719327019.429:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:19.463560 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 14:50:19.416042 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:19.416171 ignition[843]: parsed url from cmdline: "" Jun 25 14:50:19.416175 ignition[843]: no config URL provided Jun 25 14:50:19.416179 ignition[843]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:50:19.416186 ignition[843]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:50:19.416191 ignition[843]: failed to fetch config: resource requires networking Jun 25 14:50:19.416445 ignition[843]: Ignition finished successfully Jun 25 14:50:19.490598 ignition[869]: Ignition 2.15.0 Jun 25 14:50:19.490605 ignition[869]: Stage: fetch Jun 25 14:50:19.490714 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:19.490723 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:19.490817 ignition[869]: parsed url from cmdline: "" Jun 25 14:50:19.490821 ignition[869]: no config URL provided Jun 25 14:50:19.490826 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:50:19.490833 ignition[869]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:50:19.490860 ignition[869]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 14:50:19.598784 ignition[869]: GET result: OK Jun 25 14:50:19.598860 ignition[869]: config has been read from IMDS userdata Jun 25 14:50:19.598904 ignition[869]: parsing config with SHA512: 1792564da702951f6fabaa3f21b5d9ce6e18f870f60b9e2013b7fbeae76f5afab5a91f7967b9fd1bf03c12a6cb17c5d37fe83cbfbe4ebbc564fd17a7f722707d Jun 25 14:50:19.603060 unknown[869]: fetched base config from "system" Jun 25 14:50:19.603506 ignition[869]: fetch: fetch complete Jun 25 14:50:19.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:19.603068 unknown[869]: fetched base config from "system" Jun 25 14:50:19.647919 kernel: audit: type=1130 audit(1719327019.620:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:19.603512 ignition[869]: fetch: fetch passed Jun 25 14:50:19.603074 unknown[869]: fetched user config from "azure" Jun 25 14:50:19.603559 ignition[869]: Ignition finished successfully Jun 25 14:50:19.609282 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 14:50:19.648134 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:50:19.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:19.668096 ignition[875]: Ignition 2.15.0 Jun 25 14:50:19.674667 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:50:19.668103 ignition[875]: Stage: kargs Jun 25 14:50:19.693153 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 14:50:19.668237 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:19.668247 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:19.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:19.722778 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:50:19.669354 ignition[875]: kargs: kargs passed Jun 25 14:50:19.731767 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:50:19.669413 ignition[875]: Ignition finished successfully Jun 25 14:50:19.745848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:50:19.719664 ignition[881]: Ignition 2.15.0 Jun 25 14:50:19.758761 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:50:19.719671 ignition[881]: Stage: disks Jun 25 14:50:19.769038 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:50:19.719814 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:19.780968 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:50:19.719823 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:19.816787 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:50:19.721175 ignition[881]: disks: disks passed Jun 25 14:50:19.721231 ignition[881]: Ignition finished successfully Jun 25 14:50:19.906168 systemd-fsck[889]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 14:50:19.916287 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:50:19.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:19.934067 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:50:19.991981 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:50:19.992548 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:50:19.997340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:50:20.046041 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:50:20.056400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:50:20.073845 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (898) Jun 25 14:50:20.067643 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 14:50:20.079576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 14:50:20.123097 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:50:20.123119 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:50:20.123128 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:50:20.079628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:50:20.100744 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 14:50:20.127896 systemd-networkd[844]: eth0: Gained IPv6LL Jun 25 14:50:20.140139 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:50:20.158200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 14:50:20.829947 coreos-metadata[900]: Jun 25 14:50:20.829 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 14:50:20.839450 coreos-metadata[900]: Jun 25 14:50:20.838 INFO Fetch successful Jun 25 14:50:20.839450 coreos-metadata[900]: Jun 25 14:50:20.839 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 14:50:20.859678 coreos-metadata[900]: Jun 25 14:50:20.859 INFO Fetch successful Jun 25 14:50:20.876198 coreos-metadata[900]: Jun 25 14:50:20.876 INFO wrote hostname ci-3815.2.4-a-39232a46a6 to /sysroot/etc/hostname Jun 25 14:50:20.886518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 14:50:20.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:21.039558 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:50:21.091340 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:50:21.100683 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:50:21.109548 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:50:22.122467 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:50:22.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:22.142353 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:50:22.174455 kernel: BTRFS info (device sda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:50:22.151866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:50:22.167779 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:50:22.195865 ignition[1013]: INFO : Ignition 2.15.0 Jun 25 14:50:22.200481 ignition[1013]: INFO : Stage: mount Jun 25 14:50:22.200481 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:22.200481 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:22.200481 ignition[1013]: INFO : mount: mount passed Jun 25 14:50:22.200481 ignition[1013]: INFO : Ignition finished successfully Jun 25 14:50:22.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:22.198637 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:50:22.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:22.226114 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:50:22.237996 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:50:22.254855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
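The flatcar-metadata-hostname agent above probes the Azure wireserver, reads the compute name from IMDS, and writes it into the new root. A rough sketch of that sequence; the endpoints and the /sysroot/etc/hostname path are taken from the log, while the "Metadata: true" header on the IMDS call is an assumption (the wireserver probe at 168.63.129.16 is issued as logged):

    # Sketch of the hostname-agent steps logged above.
    import urllib.request

    def get(url, headers=None):
        req = urllib.request.Request(url, headers=headers or {})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode()

    get("http://168.63.129.16/?comp=versions")  # availability probe, result unused here
    name = get("http://169.254.169.254/metadata/instance/compute/name"
               "?api-version=2017-08-01&format=text",
               headers={"Metadata": "true"}).strip()

    with open("/sysroot/etc/hostname", "w") as f:  # path as written in the log
        f.write(name + "\n")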
Jun 25 14:50:22.292419 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1023) Jun 25 14:50:22.292467 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:50:22.299841 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:50:22.304466 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:50:22.308143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:50:22.339670 ignition[1041]: INFO : Ignition 2.15.0 Jun 25 14:50:22.344489 ignition[1041]: INFO : Stage: files Jun 25 14:50:22.344489 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:22.344489 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:22.344489 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:50:22.378596 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:50:22.378596 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:50:22.537277 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:50:22.546050 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:50:22.546050 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:50:22.546050 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:50:22.546050 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:50:22.537751 unknown[1041]: wrote ssh authorized keys file for user: core Jun 25 14:50:22.871658 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 14:50:23.091567 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jun 25 14:50:23.103101 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jun 25 14:50:23.545134 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 14:50:23.771001 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jun 25 14:50:23.771001 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 14:50:23.811351 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:50:23.823156 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:50:23.823156 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 14:50:23.852219 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:50:23.852247 kernel: audit: type=1130 audit(1719327023.840:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.852304 ignition[1041]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:50:23.852304 ignition[1041]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:50:23.852304 ignition[1041]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:50:23.852304 ignition[1041]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:50:23.852304 ignition[1041]: INFO : files: files passed Jun 25 14:50:23.852304 ignition[1041]: INFO : Ignition finished successfully Jun 25 14:50:23.970201 kernel: audit: type=1130 audit(1719327023.914:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:23.970227 kernel: audit: type=1131 audit(1719327023.942:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.835171 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:50:23.876500 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:50:23.885410 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:50:23.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.898831 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:50:24.023365 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:50:24.023365 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:50:24.048316 kernel: audit: type=1130 audit(1719327023.991:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:23.899003 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:50:24.055030 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:50:23.984846 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:50:24.017567 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:50:24.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.048110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:50:24.134147 kernel: audit: type=1130 audit(1719327024.085:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.134172 kernel: audit: type=1131 audit(1719327024.109:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.073512 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:50:24.073640 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
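The files stage above lists the operations it carried out: an SSH key for the core user, the helm tarball, a handful of files under /home/core and /etc/flatcar, the kubernetes sysext image with its /etc/extensions link, and the prepare-helm.service unit plus its preset. The actual config arrived via IMDS userdata and is not shown in the log; the following is only a sketch of an Ignition spec-3 config that would produce roughly those operations. Field names assume the v3 schema, the SSH key and unit contents are placeholders, and the /sysroot prefix seen in the log is what Ignition adds when writing into the new root:

    # Sketch only: an Ignition (spec 3.x) config yielding file, link, and unit
    # operations like those the files stage logged. URLs and paths come from the
    # log (minus /sysroot); key material and unit contents are placeholders.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw"}},
                # install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
                # /etc/flatcar/update.conf from the log are omitted here.
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n"},
        ]},
    }

    print(json.dumps(config, indent=2))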
Jun 25 14:50:24.110024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:50:24.141267 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:50:24.153558 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:50:24.182767 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:50:24.204649 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:50:24.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.240964 kernel: audit: type=1130 audit(1719327024.212:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.247129 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:50:24.264368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:50:24.271363 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:50:24.285280 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:50:24.298593 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:50:24.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.298712 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:50:24.352720 kernel: audit: type=1131 audit(1719327024.312:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.336737 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:50:24.350084 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:50:24.359178 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:50:24.372626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:50:24.386020 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:50:24.400278 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:50:24.413393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:50:24.426973 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:50:24.439617 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:50:24.453511 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:50:24.466576 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:50:24.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.477874 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jun 25 14:50:24.522039 kernel: audit: type=1131 audit(1719327024.492:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.477995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:50:24.515997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:50:24.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.529254 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:50:24.529360 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:50:24.587424 kernel: audit: type=1131 audit(1719327024.541:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.567162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:50:24.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.567295 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:50:24.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.581183 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:50:24.581275 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:50:24.594016 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 14:50:24.594117 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 14:50:24.643218 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:50:24.668298 iscsid[849]: iscsid shutting down. Jun 25 14:50:24.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.649121 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 14:50:24.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:24.697097 ignition[1085]: INFO : Ignition 2.15.0 Jun 25 14:50:24.697097 ignition[1085]: INFO : Stage: umount Jun 25 14:50:24.697097 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:50:24.697097 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:50:24.697097 ignition[1085]: INFO : umount: umount passed Jun 25 14:50:24.697097 ignition[1085]: INFO : Ignition finished successfully Jun 25 14:50:24.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.657235 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:50:24.671692 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:50:24.671897 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:50:24.679393 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:50:24.679509 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:50:24.693297 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 14:50:24.693433 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 14:50:24.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.703442 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:50:24.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.703545 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:50:24.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.714633 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jun 25 14:50:24.714733 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:50:24.732194 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:50:24.732314 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:50:24.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.738818 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 14:50:24.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.738866 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 14:50:24.961000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:50:24.754230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:50:24.754285 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:50:24.766371 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:50:24.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:25.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.777734 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 14:50:25.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.790308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:50:25.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.797422 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:50:24.809917 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:50:24.822854 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:50:24.822910 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:50:25.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.837048 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:50:24.837098 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:50:25.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.848414 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... 
Jun 25 14:50:25.140421 kernel: hv_netvsc 0022487d-d18e-0022-487d-d18e0022487d eth0: Data path switched from VF: enP62958s1 Jun 25 14:50:25.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.856247 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:50:25.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.856865 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:50:24.856970 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:50:24.867009 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:50:24.867111 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:50:25.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.880993 systemd[1]: Stopped target network.target - Network. Jun 25 14:50:25.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.891266 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:50:25.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.891310 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:50:25.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:25.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.903218 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:50:24.916157 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:50:24.916970 systemd-networkd[844]: eth0: DHCPv6 lease lost Jun 25 14:50:25.247000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:50:24.924234 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:50:25.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:24.924358 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:50:24.937016 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:50:24.937108 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:50:24.949773 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:50:24.949817 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:50:24.976356 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jun 25 14:50:24.982506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:50:24.982602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:50:24.998443 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:50:24.998495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:50:25.010310 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:50:25.010364 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:50:25.017725 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:50:25.017777 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:50:25.033129 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:50:25.042659 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:50:25.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:25.042751 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:50:25.065254 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:50:25.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:25.065468 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:50:25.079489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:50:25.079533 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:50:25.091685 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:50:25.091735 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:50:25.103824 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:50:25.103877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:50:25.110428 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:50:25.486379 systemd-journald[208]: Received SIGTERM from PID 1 (n/a). Jun 25 14:50:25.110468 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:50:25.134298 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:50:25.134351 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:50:25.168072 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:50:25.174341 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 14:50:25.174439 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:50:25.195563 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:50:25.195620 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:50:25.202616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:50:25.202668 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. 
Jun 25 14:50:25.216807 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 14:50:25.217355 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:50:25.217460 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:50:25.248062 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:50:25.248175 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:50:25.371558 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:50:25.371682 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:50:25.384206 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:50:25.398392 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:50:25.398461 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:50:25.438324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:50:25.449902 systemd[1]: Switching root. Jun 25 14:50:25.486990 systemd-journald[208]: Journal stopped Jun 25 14:50:29.717706 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 14:50:29.717728 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:50:29.717738 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:50:29.717748 kernel: SELinux: policy capability open_perms=1 Jun 25 14:50:29.717756 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:50:29.717765 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:50:29.717774 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:50:29.717782 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:50:29.717790 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:50:29.717798 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:50:29.717809 systemd[1]: Successfully loaded SELinux policy in 302.640ms. Jun 25 14:50:29.717819 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.441ms. Jun 25 14:50:29.717829 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:50:29.717840 systemd[1]: Detected virtualization microsoft. Jun 25 14:50:29.717851 systemd[1]: Detected architecture arm64. Jun 25 14:50:29.717860 systemd[1]: Detected first boot. Jun 25 14:50:29.717870 systemd[1]: Hostname set to . Jun 25 14:50:29.717879 systemd[1]: Initializing machine ID from random generator. Jun 25 14:50:29.717887 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:50:29.717896 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 14:50:29.717905 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 14:50:29.717914 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 14:50:29.717936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:50:29.717947 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:50:29.717957 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jun 25 14:50:29.717966 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:50:29.717976 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:50:29.717985 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:50:29.717994 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:50:29.718005 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:50:29.718015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:50:29.718024 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:50:29.718033 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:50:29.718043 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:50:29.718053 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 14:50:29.718062 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 14:50:29.718072 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 14:50:29.718083 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:50:29.718092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:50:29.718101 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:50:29.718113 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:50:29.718123 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:50:29.718132 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:50:29.718142 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:50:29.718151 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:50:29.718162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:50:29.718171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:50:29.718180 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:50:29.718190 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:50:29.718199 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:50:29.718209 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:50:29.718220 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:50:29.718229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:50:29.718239 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:50:29.718249 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:50:29.718259 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:50:29.718269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:50:29.718279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:50:29.718290 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jun 25 14:50:29.718300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:50:29.718309 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:50:29.718319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:50:29.718328 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:50:29.718338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:50:29.718347 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 14:50:29.718357 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 14:50:29.718367 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 14:50:29.718377 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 14:50:29.718386 kernel: audit: type=1131 audit(1719327029.491:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.718395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 14:50:29.718404 kernel: loop: module loaded Jun 25 14:50:29.718413 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 14:50:29.718422 kernel: fuse: init (API version 7.37) Jun 25 14:50:29.718431 kernel: audit: type=1131 audit(1719327029.534:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.718441 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 14:50:29.718451 kernel: audit: type=1130 audit(1719327029.562:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.718461 systemd[1]: systemd-journald.service: Consumed 3.655s CPU time. Jun 25 14:50:29.718471 kernel: audit: type=1131 audit(1719327029.562:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.718479 kernel: audit: type=1334 audit(1719327029.586:110): prog-id=18 op=LOAD Jun 25 14:50:29.718488 kernel: audit: type=1334 audit(1719327029.586:111): prog-id=19 op=LOAD Jun 25 14:50:29.718497 kernel: ACPI: bus type drm_connector registered Jun 25 14:50:29.718507 kernel: audit: type=1334 audit(1719327029.586:112): prog-id=20 op=LOAD Jun 25 14:50:29.718516 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:50:29.718525 kernel: audit: type=1334 audit(1719327029.586:113): prog-id=16 op=UNLOAD Jun 25 14:50:29.718534 kernel: audit: type=1334 audit(1719327029.587:114): prog-id=17 op=UNLOAD Jun 25 14:50:29.718543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:50:29.718552 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:50:29.718561 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jun 25 14:50:29.718571 kernel: audit: type=1305 audit(1719327029.706:115): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:50:29.718584 systemd-journald[1217]: Journal started Jun 25 14:50:29.718622 systemd-journald[1217]: Runtime Journal (/run/log/journal/677e0525a9564c768881c8f98086fc85) is 8.0M, max 78.6M, 70.6M free. Jun 25 14:50:26.558000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:50:27.036000 audit: BPF prog-id=10 op=LOAD Jun 25 14:50:27.036000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:50:27.036000 audit: BPF prog-id=11 op=LOAD Jun 25 14:50:27.036000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:50:28.815000 audit: BPF prog-id=12 op=LOAD Jun 25 14:50:28.815000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:50:28.815000 audit: BPF prog-id=13 op=LOAD Jun 25 14:50:28.815000 audit: BPF prog-id=14 op=LOAD Jun 25 14:50:28.815000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:50:28.815000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:50:28.816000 audit: BPF prog-id=15 op=LOAD Jun 25 14:50:28.816000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:50:28.817000 audit: BPF prog-id=16 op=LOAD Jun 25 14:50:28.817000 audit: BPF prog-id=17 op=LOAD Jun 25 14:50:28.817000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:50:28.817000 audit: BPF prog-id=14 op=UNLOAD Jun 25 14:50:28.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:28.826000 audit: BPF prog-id=15 op=UNLOAD Jun 25 14:50:28.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:28.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:29.586000 audit: BPF prog-id=18 op=LOAD Jun 25 14:50:29.586000 audit: BPF prog-id=19 op=LOAD Jun 25 14:50:29.586000 audit: BPF prog-id=20 op=LOAD Jun 25 14:50:29.586000 audit: BPF prog-id=16 op=UNLOAD Jun 25 14:50:29.587000 audit: BPF prog-id=17 op=UNLOAD Jun 25 14:50:29.706000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:50:28.809200 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:50:28.809212 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 14:50:28.818486 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 14:50:28.818832 systemd[1]: systemd-journald.service: Consumed 3.655s CPU time. Jun 25 14:50:29.706000 audit[1217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd1432940 a2=4000 a3=1 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:50:29.706000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:50:29.753371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:50:29.763107 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 14:50:29.763183 systemd[1]: Stopped verity-setup.service. Jun 25 14:50:29.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.781893 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:50:29.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.782830 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:50:29.788813 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:50:29.795475 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:50:29.805562 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 14:50:29.812385 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:50:29.818828 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:50:29.824590 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:50:29.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.831538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:50:29.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.838963 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:50:29.839116 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 25 14:50:29.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.846887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:50:29.847066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:50:29.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.854052 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:50:29.854224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:50:29.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.860973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:50:29.861131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:50:29.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.868681 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:50:29.868855 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:50:29.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.876502 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:50:29.876667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:50:29.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:29.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.884101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:50:29.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.892254 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:50:29.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.899809 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:50:29.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.907617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:50:29.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.915272 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:50:29.929115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:50:29.937103 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:50:29.943310 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:50:29.945256 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:50:29.953213 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:50:29.959491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:50:29.961133 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:50:29.967741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:50:29.969343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:50:29.977296 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:50:29.988114 systemd-journald[1217]: Time spent on flushing to /var/log/journal/677e0525a9564c768881c8f98086fc85 is 18.291ms for 1046 entries. Jun 25 14:50:29.988114 systemd-journald[1217]: System Journal (/var/log/journal/677e0525a9564c768881c8f98086fc85) is 8.0M, max 2.6G, 2.6G free. Jun 25 14:50:30.038595 systemd-journald[1217]: Received client request to flush runtime journal. 
Jun 25 14:50:30.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:29.990550 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:50:30.007230 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:50:30.014597 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:50:30.022241 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:50:30.031144 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:50:30.038893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:50:30.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:30.046742 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:50:30.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:30.054982 udevadm[1231]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 14:50:30.101434 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:50:30.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:30.113394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:50:30.256566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:50:30.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:31.014791 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:50:31.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:31.022000 audit: BPF prog-id=21 op=LOAD Jun 25 14:50:31.022000 audit: BPF prog-id=22 op=LOAD Jun 25 14:50:31.022000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:50:31.022000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:50:31.025288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:50:31.054702 systemd-udevd[1236]: Using default interface naming scheme 'v252'. Jun 25 14:50:31.212385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:50:31.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:50:31.228000 audit: BPF prog-id=23 op=LOAD Jun 25 14:50:31.236114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:50:31.264443 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 14:50:31.273997 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1251) Jun 25 14:50:31.284000 audit: BPF prog-id=24 op=LOAD Jun 25 14:50:31.284000 audit: BPF prog-id=25 op=LOAD Jun 25 14:50:31.284000 audit: BPF prog-id=26 op=LOAD Jun 25 14:50:31.291140 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:50:31.341728 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:50:31.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:31.379973 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 14:50:31.412953 kernel: hv_vmbus: registering driver hv_balloon Jun 25 14:50:31.413032 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 14:50:31.421878 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 25 14:50:31.434563 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 14:50:31.434678 kernel: hv_vmbus: registering driver hv_utils Jun 25 14:50:31.813125 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 14:50:31.813202 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 14:50:31.813218 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 14:50:31.842256 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1242) Jun 25 14:50:31.842348 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 14:50:31.843263 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 14:50:31.854171 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 14:50:31.861481 kernel: Console: switching to colour dummy device 80x25 Jun 25 14:50:31.866652 systemd-networkd[1257]: lo: Link UP Jun 25 14:50:31.866955 systemd-networkd[1257]: lo: Gained carrier Jun 25 14:50:31.867514 systemd-networkd[1257]: Enumeration completed Jun 25 14:50:31.867731 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:50:31.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:31.880243 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 14:50:31.875828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 14:50:31.880801 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:50:31.880808 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:50:31.894476 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
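Up to this point systemd-udevd has started, the Hyper-V balloon, utility and framebuffer drivers have bound, and systemd-networkd has enumerated the links and matched eth0 against /usr/lib/systemd/network/zz-default.network. Purely as a sketch of how that state could be examined afterwards (these commands are illustrative, not taken from the log):

    # Wait for the udev event queue seen above to drain
    udevadm settle
    # udev properties recorded for the NIC
    udevadm info -p /sys/class/net/eth0
    # Which .network file matched eth0 and its current carrier/address state
    networkctl status eth0
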
Jun 25 14:50:31.943252 kernel: mlx5_core f5ee:00:02.0 enP62958s1: Link up Jun 25 14:50:31.969508 kernel: hv_netvsc 0022487d-d18e-0022-487d-d18e0022487d eth0: Data path switched to VF: enP62958s1 Jun 25 14:50:31.970137 systemd-networkd[1257]: enP62958s1: Link UP Jun 25 14:50:31.970246 systemd-networkd[1257]: eth0: Link UP Jun 25 14:50:31.970249 systemd-networkd[1257]: eth0: Gained carrier Jun 25 14:50:31.970264 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:50:31.974549 systemd-networkd[1257]: enP62958s1: Gained carrier Jun 25 14:50:31.985361 systemd-networkd[1257]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:50:32.037785 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:50:32.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.049634 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:50:32.124541 lvm[1317]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:50:32.149301 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:50:32.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.155512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:50:32.168438 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:50:32.172294 lvm[1318]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:50:32.197214 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:50:32.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.203643 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:50:32.210118 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:50:32.210148 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:50:32.216392 systemd[1]: Reached target machines.target - Containers. Jun 25 14:50:32.228454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:50:32.234005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:50:32.234091 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:50:32.235694 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... 
Jun 25 14:50:32.242924 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:50:32.250268 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:50:32.257549 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:50:32.286401 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1320 (bootctl) Jun 25 14:50:32.295145 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:50:32.302147 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:50:32.303053 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:50:32.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.309895 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:50:32.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.335262 kernel: loop0: detected capacity change from 0 to 194512 Jun 25 14:50:32.397273 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:50:32.428259 kernel: loop1: detected capacity change from 0 to 113264 Jun 25 14:50:32.455176 systemd-fsck[1327]: fsck.fat 4.2 (2021-01-31) Jun 25 14:50:32.455176 systemd-fsck[1327]: /dev/sda1: 242 files, 114659/258078 clusters Jun 25 14:50:32.457349 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:50:32.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.473743 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 14:50:32.483560 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:50:32.495108 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 14:50:32.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:32.832254 kernel: loop2: detected capacity change from 0 to 59648 Jun 25 14:50:33.133260 kernel: loop3: detected capacity change from 0 to 55744 Jun 25 14:50:33.524262 kernel: loop4: detected capacity change from 0 to 194512 Jun 25 14:50:33.534254 kernel: loop5: detected capacity change from 0 to 113264 Jun 25 14:50:33.543257 kernel: loop6: detected capacity change from 0 to 59648 Jun 25 14:50:33.553259 kernel: loop7: detected capacity change from 0 to 55744 Jun 25 14:50:33.556338 (sd-sysext)[1337]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 14:50:33.557810 (sd-sysext)[1337]: Merged extensions into '/usr'. 
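The (sd-sysext) lines show systemd-sysext overlaying the Flatcar system extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' onto /usr. A hedged illustration of that mechanism, not commands captured in this log, assuming images live in the usual /etc/extensions or /var/lib/extensions directories:

    # Show extension images and whether /usr and /opt are currently merged
    systemd-sysext status
    # Re-apply the overlay after adding or removing extension images
    sudo systemd-sysext refresh
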
Jun 25 14:50:33.559325 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:50:33.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:33.576423 systemd[1]: Starting ensure-sysext.service... Jun 25 14:50:33.581885 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:50:33.598090 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:50:33.599383 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:50:33.599780 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:50:33.600540 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:50:33.627104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:50:33.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:33.638497 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:50:33.648093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:50:33.655411 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:50:33.662000 audit: BPF prog-id=27 op=LOAD Jun 25 14:50:33.663794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:50:33.670000 audit: BPF prog-id=28 op=LOAD Jun 25 14:50:33.671840 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:50:33.677171 systemd-networkd[1257]: eth0: Gained IPv6LL Jun 25 14:50:33.680913 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:50:33.687154 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:50:33.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:50:33.701000 audit[1348]: SYSTEM_BOOT pid=1348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:50:33.698920 systemd[1]: Reloading. Jun 25 14:50:33.817535 systemd-resolved[1346]: Positive Trust Anchors: Jun 25 14:50:33.817843 systemd-resolved[1346]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:50:33.817927 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:50:33.851715 systemd-resolved[1346]: Using system hostname 'ci-3815.2.4-a-39232a46a6'. Jun 25 14:50:33.872053 augenrules[1401]: No rules Jun 25 14:50:33.871000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:50:33.871000 audit[1401]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb6dd650 a2=420 a3=0 items=0 ppid=1342 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:50:33.871000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:50:33.924013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:50:34.010638 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:50:34.017302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:50:34.033185 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:50:34.040028 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:50:34.053747 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:50:34.062992 systemd[1]: Reached target network.target - Network. Jun 25 14:50:34.068297 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:50:34.076407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:50:34.083108 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:50:34.088846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:50:34.095671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:50:34.102744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:50:34.110436 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:50:34.112204 systemd-timesyncd[1347]: Contacted time server 12.167.151.1:123 (0.flatcar.pool.ntp.org). Jun 25 14:50:34.112291 systemd-timesyncd[1347]: Initial clock synchronization to Tue 2024-06-25 14:50:34.124607 UTC. Jun 25 14:50:34.116495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:50:34.116654 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
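In this stretch audit-rules.service loads /etc/audit/audit.rules via auditctl (augenrules reports "No rules"), systemd-resolved starts with the DNSSEC root trust anchor and picks up the hostname ci-3815.2.4-a-39232a46a6, and systemd-timesyncd performs the initial clock synchronization against 0.flatcar.pool.ntp.org. A minimal verification sketch, assuming the standard client tools are installed (not output from this log):

    # Kernel audit rules currently loaded (empty here, matching "No rules")
    sudo auditctl -l
    # Resolver state: servers, DNSSEC setting, search domains per link
    resolvectl status
    # NTP server and offset behind the initial synchronization above
    timedatectl timesync-status
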
Jun 25 14:50:34.117810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:50:34.117983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:50:34.125196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:50:34.127552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:50:34.134882 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:50:34.135027 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:50:34.143469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:50:34.147639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:50:34.154738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:50:34.162412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:50:34.167997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:50:34.168162 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:50:34.169066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:50:34.169304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:50:34.176099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:50:34.176378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:50:34.183033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:50:34.183179 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:50:34.190002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:50:34.190121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:50:34.192378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:50:34.199762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:50:34.206919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:50:34.213858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:50:34.221009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:50:34.226643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:50:34.226799 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:50:34.227808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:50:34.227980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:50:34.234630 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 25 14:50:34.234786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:50:34.241114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:50:34.241287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:50:34.248064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:50:34.248198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:50:34.255004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:50:34.255077 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:50:34.256355 systemd[1]: Finished ensure-sysext.service. Jun 25 14:50:34.312524 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:50:34.318952 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:50:36.921932 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:50:36.932278 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 14:50:36.942591 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:50:36.955609 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 14:50:36.961871 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:50:36.967798 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:50:36.974034 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:50:36.980505 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:50:36.986568 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:50:36.993167 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:50:36.999557 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:50:36.999594 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:50:37.004476 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:50:37.010516 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:50:37.018270 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:50:37.029042 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:50:37.034851 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:50:37.035404 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:50:37.041393 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:50:37.046630 systemd[1]: Reached target basic.target - Basic System. 
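Here systemd finishes ensure-sysext, cleans the CA certificate links, rebuilds the dynamic linker cache, and wires up the path, timer and socket units (motdgen.path, logrotate.timer, mdadm.timer, dbus.socket, docker.socket, sshd.socket) on the way to basic.target; the ldconfig message about /lib/ld.so.conf is a benign warning about a non-ELF file encountered during its scan. As an illustrative sketch of how to enumerate those units on a booted system:

    # Timer, socket and path units backing the targets reached above
    systemctl list-timers --all
    systemctl list-sockets
    systemctl list-units --type=path
    # Everything ordered below basic.target
    systemctl list-dependencies basic.target
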
Jun 25 14:50:37.051671 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:50:37.051701 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:50:37.059384 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:50:37.067133 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 14:50:37.074282 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:50:37.081463 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:50:37.088650 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:50:37.089920 jq[1459]: false Jun 25 14:50:37.094317 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:50:37.123387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:50:37.132133 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:50:37.139849 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:50:37.146046 extend-filesystems[1460]: Found loop4 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found loop5 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found loop6 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found loop7 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda1 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda2 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda3 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found usr Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda4 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda6 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda7 Jun 25 14:50:37.146046 extend-filesystems[1460]: Found sda9 Jun 25 14:50:37.146046 extend-filesystems[1460]: Checking size of /dev/sda9 Jun 25 14:50:37.289508 dbus-daemon[1458]: [system] SELinux support is enabled Jun 25 14:50:37.153311 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:50:37.291696 extend-filesystems[1460]: Old size kept for /dev/sda9 Jun 25 14:50:37.291696 extend-filesystems[1460]: Found sr0 Jun 25 14:50:37.375754 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1500) Jun 25 14:50:37.180945 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:50:37.190511 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:50:37.217351 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:50:37.223198 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:50:37.376349 update_engine[1484]: I0625 14:50:37.317482 1484 main.cc:92] Flatcar Update Engine starting Jun 25 14:50:37.376349 update_engine[1484]: I0625 14:50:37.321403 1484 update_check_scheduler.cc:74] Next update check in 4m5s Jun 25 14:50:37.223321 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jun 25 14:50:37.376650 jq[1487]: true Jun 25 14:50:37.227641 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:50:37.229097 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:50:37.246416 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 14:50:37.261876 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:50:37.262073 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:50:37.262462 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:50:37.262614 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:50:37.270787 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:50:37.271000 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:50:37.285098 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:50:37.322529 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:50:37.366002 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:50:37.366203 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:50:37.386947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:50:37.394978 jq[1519]: true Jun 25 14:50:37.386976 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:50:37.397028 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:50:37.397052 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:50:37.407095 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 14:50:37.408885 systemd-logind[1480]: New seat seat0. Jun 25 14:50:37.413807 systemd[1]: Started update-engine.service - Update Engine. Jun 25 14:50:37.424061 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:50:37.441706 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
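The lines above also show the Flatcar update machinery starting: update_engine schedules its next check in roughly four minutes and locksmithd comes up as the cluster reboot manager with strategy "reboot". As an illustrative sketch, assuming the stock Flatcar client binaries are present on the image (they are not exercised anywhere in this log):

    # Current update_engine state; the log shows it idle after startup
    update_engine_client -status
    # Reboot strategy and lock status used by locksmithd
    locksmithctl status
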
Jun 25 14:50:37.482942 tar[1512]: linux-arm64/helm Jun 25 14:50:37.504815 coreos-metadata[1455]: Jun 25 14:50:37.500 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 14:50:37.512350 coreos-metadata[1455]: Jun 25 14:50:37.512 INFO Fetch successful Jun 25 14:50:37.512350 coreos-metadata[1455]: Jun 25 14:50:37.512 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 14:50:37.518005 coreos-metadata[1455]: Jun 25 14:50:37.517 INFO Fetch successful Jun 25 14:50:37.518005 coreos-metadata[1455]: Jun 25 14:50:37.517 INFO Fetching http://168.63.129.16/machine/ae638915-ebb0-47a3-9450-fb3436ff474f/f4ba1c74%2Dc819%2D4ab0%2Da1bd%2Dd81a7f7d8348.%5Fci%2D3815.2.4%2Da%2D39232a46a6?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 14:50:37.520275 coreos-metadata[1455]: Jun 25 14:50:37.520 INFO Fetch successful Jun 25 14:50:37.520275 coreos-metadata[1455]: Jun 25 14:50:37.520 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 14:50:37.538273 coreos-metadata[1455]: Jun 25 14:50:37.536 INFO Fetch successful Jun 25 14:50:37.593194 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 14:50:37.604310 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 14:50:37.636832 bash[1548]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:50:37.637800 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:50:37.647112 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 14:50:37.703149 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:50:38.063535 containerd[1520]: time="2024-06-25T14:50:38.063406013Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:50:38.139716 containerd[1520]: time="2024-06-25T14:50:38.139669616Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:50:38.139894 containerd[1520]: time="2024-06-25T14:50:38.139878831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.152510 containerd[1520]: time="2024-06-25T14:50:38.152448301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:50:38.152653 containerd[1520]: time="2024-06-25T14:50:38.152638343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.152972 containerd[1520]: time="2024-06-25T14:50:38.152948703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:50:38.153281 containerd[1520]: time="2024-06-25T14:50:38.153263346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jun 25 14:50:38.153444 containerd[1520]: time="2024-06-25T14:50:38.153428413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.154085 containerd[1520]: time="2024-06-25T14:50:38.154065344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:50:38.154159 containerd[1520]: time="2024-06-25T14:50:38.154145836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.154317 containerd[1520]: time="2024-06-25T14:50:38.154300335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.154635 containerd[1520]: time="2024-06-25T14:50:38.154601610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.155146 containerd[1520]: time="2024-06-25T14:50:38.155125588Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:50:38.158317 containerd[1520]: time="2024-06-25T14:50:38.158281584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:50:38.159567 containerd[1520]: time="2024-06-25T14:50:38.159534032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:50:38.159682 containerd[1520]: time="2024-06-25T14:50:38.159667999Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:50:38.159852 containerd[1520]: time="2024-06-25T14:50:38.159834066Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:50:38.159929 containerd[1520]: time="2024-06-25T14:50:38.159915878Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172364710Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172420306Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172434715Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172481625Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172509483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172521972Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172613751Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172784781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172801992Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172816041Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172834373Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172848702Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172867875Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.174609 containerd[1520]: time="2024-06-25T14:50:38.172881684Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.172895412Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.172909582Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.172923391Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.172936839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.172949928Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173040306Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173333215Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173363114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173377123Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173402700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173466941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173479990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173493398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175000 containerd[1520]: time="2024-06-25T14:50:38.173505686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173570328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173588099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173601708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173613916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173627565Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173768336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173786587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173799596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173814806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173828054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173842664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173855792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:50:38.175920 containerd[1520]: time="2024-06-25T14:50:38.173866519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 14:50:38.176195 containerd[1520]: time="2024-06-25T14:50:38.174120643Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:50:38.176195 containerd[1520]: time="2024-06-25T14:50:38.174178080Z" level=info msg="Connect containerd service" Jun 25 14:50:38.176195 containerd[1520]: time="2024-06-25T14:50:38.174212262Z" level=info msg="using legacy CRI server" Jun 25 14:50:38.176195 containerd[1520]: time="2024-06-25T14:50:38.174219387Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:50:38.176195 containerd[1520]: time="2024-06-25T14:50:38.174264496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:50:38.176195 containerd[1520]: time="2024-06-25T14:50:38.175677367Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:50:38.177037 containerd[1520]: time="2024-06-25T14:50:38.176994857Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:50:38.177210 containerd[1520]: time="2024-06-25T14:50:38.177192345Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:50:38.177325 containerd[1520]: time="2024-06-25T14:50:38.177311021Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:50:38.177404 containerd[1520]: time="2024-06-25T14:50:38.177391393Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:50:38.177558 containerd[1520]: time="2024-06-25T14:50:38.177153199Z" level=info msg="Start subscribing containerd event" Jun 25 14:50:38.178214 containerd[1520]: time="2024-06-25T14:50:38.178107735Z" level=info msg="Start recovering state" Jun 25 14:50:38.178451 containerd[1520]: time="2024-06-25T14:50:38.178433545Z" level=info msg="Start event monitor" Jun 25 14:50:38.178533 containerd[1520]: time="2024-06-25T14:50:38.178514678Z" level=info msg="Start snapshots syncer" Jun 25 14:50:38.178587 containerd[1520]: time="2024-06-25T14:50:38.178575757Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:50:38.178642 containerd[1520]: time="2024-06-25T14:50:38.178625589Z" level=info msg="Start streaming server" Jun 25 14:50:38.178757 containerd[1520]: time="2024-06-25T14:50:38.178596411Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:50:38.178802 containerd[1520]: time="2024-06-25T14:50:38.178778888Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:50:38.178922 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:50:38.185760 containerd[1520]: time="2024-06-25T14:50:38.185709720Z" level=info msg="containerd successfully booted in 0.124294s" Jun 25 14:50:38.244062 tar[1512]: linux-arm64/LICENSE Jun 25 14:50:38.244062 tar[1512]: linux-arm64/README.md Jun 25 14:50:38.260535 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:50:38.335314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:50:38.807149 kubelet[1576]: E0625 14:50:38.807018 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:50:38.809987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:50:38.810119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:50:40.124012 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:50:40.143163 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:50:40.156762 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:50:40.164070 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 14:50:40.170666 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:50:40.170862 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:50:40.187808 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jun 25 14:50:40.195700 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 14:50:40.202676 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:50:40.212848 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 14:50:40.220120 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 14:50:40.226829 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:50:40.232336 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:50:40.246021 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:50:40.254947 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:50:40.255119 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:50:40.261842 systemd[1]: Startup finished in 694ms (kernel) + 12.507s (initrd) + 13.640s (userspace) = 26.842s. Jun 25 14:50:40.640876 login[1600]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jun 25 14:50:40.642726 login[1601]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 14:50:40.649947 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:50:40.657561 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:50:40.661879 systemd-logind[1480]: New session 2 of user core. Jun 25 14:50:40.668125 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:50:40.675846 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:50:40.693079 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:50:40.790510 systemd[1604]: Queued start job for default target default.target. Jun 25 14:50:40.796644 systemd[1604]: Reached target paths.target - Paths. Jun 25 14:50:40.796819 systemd[1604]: Reached target sockets.target - Sockets. Jun 25 14:50:40.796894 systemd[1604]: Reached target timers.target - Timers. Jun 25 14:50:40.796961 systemd[1604]: Reached target basic.target - Basic System. Jun 25 14:50:40.797078 systemd[1604]: Reached target default.target - Main User Target. Jun 25 14:50:40.797151 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:50:40.797278 systemd[1604]: Startup finished in 97ms. Jun 25 14:50:40.798319 systemd[1]: Started session-2.scope - Session 2 of User core. 
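The "Startup finished in 694ms (kernel) + 12.507s (initrd) + 13.640s (userspace) = 26.842s" line is the same figure systemd-analyze prints after boot, and the pam_lastlog note about /var/log/lastlog being locked is consistent with the two logins on tty1 and ttyAMA0 racing for it. A sketch of the timing breakdown commands (illustrative only):

    # Overall boot time, matching the "Startup finished" entry above
    systemd-analyze
    # Units that dominated the 13.6s of userspace
    systemd-analyze blame
    systemd-analyze critical-chain multi-user.target
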
Jun 25 14:50:41.575021 waagent[1599]: 2024-06-25T14:50:41.574922Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 14:50:41.581559 waagent[1599]: 2024-06-25T14:50:41.581466Z INFO Daemon Daemon OS: flatcar 3815.2.4 Jun 25 14:50:41.586555 waagent[1599]: 2024-06-25T14:50:41.586472Z INFO Daemon Daemon Python: 3.11.6 Jun 25 14:50:41.591613 waagent[1599]: 2024-06-25T14:50:41.591489Z INFO Daemon Daemon Run daemon Jun 25 14:50:41.596386 waagent[1599]: 2024-06-25T14:50:41.596323Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.4' Jun 25 14:50:41.605913 waagent[1599]: 2024-06-25T14:50:41.605824Z INFO Daemon Daemon Using waagent for provisioning Jun 25 14:50:41.612109 waagent[1599]: 2024-06-25T14:50:41.612049Z INFO Daemon Daemon Activate resource disk Jun 25 14:50:41.617534 waagent[1599]: 2024-06-25T14:50:41.617461Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 14:50:41.630226 waagent[1599]: 2024-06-25T14:50:41.630150Z INFO Daemon Daemon Found device: None Jun 25 14:50:41.635417 waagent[1599]: 2024-06-25T14:50:41.635342Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 14:50:41.641570 login[1600]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 14:50:41.645732 waagent[1599]: 2024-06-25T14:50:41.645649Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 14:50:41.650420 systemd-logind[1480]: New session 1 of user core. Jun 25 14:50:41.658559 waagent[1599]: 2024-06-25T14:50:41.658496Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 14:50:41.665126 waagent[1599]: 2024-06-25T14:50:41.665047Z INFO Daemon Daemon Running default provisioning handler Jun 25 14:50:41.671462 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:50:41.692201 waagent[1599]: 2024-06-25T14:50:41.692094Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jun 25 14:50:41.708872 waagent[1599]: 2024-06-25T14:50:41.708789Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 14:50:41.724819 waagent[1599]: 2024-06-25T14:50:41.724720Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 14:50:41.730647 waagent[1599]: 2024-06-25T14:50:41.730555Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 14:50:41.827250 waagent[1599]: 2024-06-25T14:50:41.827096Z INFO Daemon Daemon Successfully mounted dvd Jun 25 14:50:41.876393 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 14:50:41.892684 waagent[1599]: 2024-06-25T14:50:41.892598Z INFO Daemon Daemon Detect protocol endpoint Jun 25 14:50:41.898394 waagent[1599]: 2024-06-25T14:50:41.898313Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 14:50:41.904686 waagent[1599]: 2024-06-25T14:50:41.904614Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 14:50:41.911560 waagent[1599]: 2024-06-25T14:50:41.911486Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 14:50:41.917254 waagent[1599]: 2024-06-25T14:50:41.917171Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 14:50:41.922431 waagent[1599]: 2024-06-25T14:50:41.922366Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 14:50:41.953956 waagent[1599]: 2024-06-25T14:50:41.953898Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 14:50:41.961453 waagent[1599]: 2024-06-25T14:50:41.961419Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 14:50:41.967440 waagent[1599]: 2024-06-25T14:50:41.967363Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 14:50:42.534665 waagent[1599]: 2024-06-25T14:50:42.534572Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 14:50:42.541561 waagent[1599]: 2024-06-25T14:50:42.541474Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 14:50:42.550963 waagent[1599]: 2024-06-25T14:50:42.550904Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 14:50:42.578902 waagent[1599]: 2024-06-25T14:50:42.576456Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 14:50:42.582868 waagent[1599]: 2024-06-25T14:50:42.582813Z INFO Daemon Jun 25 14:50:42.585814 waagent[1599]: 2024-06-25T14:50:42.585754Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 836f3a7a-70d4-4406-92ce-60e80d70d01f eTag: 17908364675231332008 source: Fabric] Jun 25 14:50:42.597744 waagent[1599]: 2024-06-25T14:50:42.597459Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 14:50:42.604637 waagent[1599]: 2024-06-25T14:50:42.604582Z INFO Daemon Jun 25 14:50:42.607602 waagent[1599]: 2024-06-25T14:50:42.607547Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 14:50:42.618995 waagent[1599]: 2024-06-25T14:50:42.618955Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 14:50:42.715277 waagent[1599]: 2024-06-25T14:50:42.715172Z INFO Daemon Downloaded certificate {'thumbprint': '828CC618A77A2F39B7BFA39CCD02BCB3C86FE97D', 'hasPrivateKey': False} Jun 25 14:50:42.725374 waagent[1599]: 2024-06-25T14:50:42.725320Z INFO Daemon Downloaded certificate {'thumbprint': '66D879D6DD26C79437AD8C753B8B4199AE7BF41B', 'hasPrivateKey': True} Jun 25 14:50:42.735444 waagent[1599]: 2024-06-25T14:50:42.735388Z INFO Daemon Fetch goal state completed Jun 25 14:50:42.747448 waagent[1599]: 2024-06-25T14:50:42.747397Z INFO Daemon Daemon Starting provisioning Jun 25 14:50:42.752542 waagent[1599]: 2024-06-25T14:50:42.752473Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 14:50:42.757304 waagent[1599]: 2024-06-25T14:50:42.757247Z INFO Daemon Daemon Set hostname [ci-3815.2.4-a-39232a46a6] Jun 25 14:50:43.304995 waagent[1599]: 2024-06-25T14:50:43.304913Z INFO Daemon Daemon Publish hostname [ci-3815.2.4-a-39232a46a6] Jun 25 14:50:43.311505 waagent[1599]: 2024-06-25T14:50:43.311426Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 14:50:43.317774 waagent[1599]: 2024-06-25T14:50:43.317709Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 14:50:43.371100 systemd-networkd[1257]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:50:43.371110 systemd-networkd[1257]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
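The "Test for route to 168.63.129.16" entries above are the agent confirming it can reach the Azure wireserver before provisioning continues. A minimal reachability sketch of the same idea, under the assumptions that the wireserver answers plain HTTP on port 80 and that this is only an illustration, not waagent's actual probe:

    import socket

    # Probe TCP reachability of the wireserver address the agent tests above.
    # Hypothetical helper for illustration; waagent's own check differs.
    def wireserver_reachable(host="168.63.129.16", port=80, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(wireserver_reachable())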
Jun 25 14:50:43.371141 systemd-networkd[1257]: eth0: DHCP lease lost Jun 25 14:50:43.372534 waagent[1599]: 2024-06-25T14:50:43.372442Z INFO Daemon Daemon Create user account if not exists Jun 25 14:50:43.378096 waagent[1599]: 2024-06-25T14:50:43.378027Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 14:50:43.383939 waagent[1599]: 2024-06-25T14:50:43.383875Z INFO Daemon Daemon Configure sudoer Jun 25 14:50:43.388131 systemd-networkd[1257]: eth0: DHCPv6 lease lost Jun 25 14:50:43.389197 waagent[1599]: 2024-06-25T14:50:43.389091Z INFO Daemon Daemon Configure sshd Jun 25 14:50:43.393706 waagent[1599]: 2024-06-25T14:50:43.393637Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 14:50:43.407562 waagent[1599]: 2024-06-25T14:50:43.407475Z INFO Daemon Daemon Deploy ssh public key. Jun 25 14:50:43.430319 systemd-networkd[1257]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:50:44.638413 waagent[1599]: 2024-06-25T14:50:44.638361Z INFO Daemon Daemon Provisioning complete Jun 25 14:50:44.660105 waagent[1599]: 2024-06-25T14:50:44.660057Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 14:50:44.666873 waagent[1599]: 2024-06-25T14:50:44.666808Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 14:50:44.676797 waagent[1599]: 2024-06-25T14:50:44.676734Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 14:50:44.820215 waagent[1651]: 2024-06-25T14:50:44.820135Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 14:50:44.820741 waagent[1651]: 2024-06-25T14:50:44.820692Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.4 Jun 25 14:50:44.820887 waagent[1651]: 2024-06-25T14:50:44.820855Z INFO ExtHandler ExtHandler Python: 3.11.6 Jun 25 14:50:44.884711 waagent[1651]: 2024-06-25T14:50:44.884622Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 14:50:44.885077 waagent[1651]: 2024-06-25T14:50:44.885036Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:50:44.885260 waagent[1651]: 2024-06-25T14:50:44.885206Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:50:44.892546 waagent[1651]: 2024-06-25T14:50:44.892411Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 14:50:44.901039 waagent[1651]: 2024-06-25T14:50:44.900988Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 14:50:44.901796 waagent[1651]: 2024-06-25T14:50:44.901752Z INFO ExtHandler Jun 25 14:50:44.901982 waagent[1651]: 2024-06-25T14:50:44.901948Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cbc9c6b0-a6c6-409d-9327-112daecfd2c0 eTag: 17908364675231332008 source: Fabric] Jun 25 14:50:44.902417 waagent[1651]: 2024-06-25T14:50:44.902376Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 25 14:50:44.903123 waagent[1651]: 2024-06-25T14:50:44.903080Z INFO ExtHandler Jun 25 14:50:44.903316 waagent[1651]: 2024-06-25T14:50:44.903278Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 14:50:44.907356 waagent[1651]: 2024-06-25T14:50:44.907322Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 14:50:44.999792 waagent[1651]: 2024-06-25T14:50:44.999705Z INFO ExtHandler Downloaded certificate {'thumbprint': '828CC618A77A2F39B7BFA39CCD02BCB3C86FE97D', 'hasPrivateKey': False} Jun 25 14:50:45.000456 waagent[1651]: 2024-06-25T14:50:45.000407Z INFO ExtHandler Downloaded certificate {'thumbprint': '66D879D6DD26C79437AD8C753B8B4199AE7BF41B', 'hasPrivateKey': True} Jun 25 14:50:45.001067 waagent[1651]: 2024-06-25T14:50:45.000990Z INFO ExtHandler Fetch goal state completed Jun 25 14:50:45.019146 waagent[1651]: 2024-06-25T14:50:45.019070Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1651 Jun 25 14:50:45.019527 waagent[1651]: 2024-06-25T14:50:45.019485Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 14:50:45.021453 waagent[1651]: 2024-06-25T14:50:45.021397Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.4', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 14:50:45.021998 waagent[1651]: 2024-06-25T14:50:45.021958Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 14:50:45.087362 waagent[1651]: 2024-06-25T14:50:45.087324Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 14:50:45.087727 waagent[1651]: 2024-06-25T14:50:45.087678Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 14:50:45.094652 waagent[1651]: 2024-06-25T14:50:45.094609Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 14:50:45.102580 systemd[1]: Reloading. Jun 25 14:50:45.257922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:50:45.341278 waagent[1651]: 2024-06-25T14:50:45.339387Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 14:50:45.345339 systemd[1]: Reloading. Jun 25 14:50:45.494358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:50:45.568344 waagent[1651]: 2024-06-25T14:50:45.567887Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 14:50:45.568344 waagent[1651]: 2024-06-25T14:50:45.568076Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 14:50:45.876786 waagent[1651]: 2024-06-25T14:50:45.876651Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 14:50:45.877807 waagent[1651]: 2024-06-25T14:50:45.877743Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 14:50:45.878854 waagent[1651]: 2024-06-25T14:50:45.878786Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 14:50:45.878968 waagent[1651]: 2024-06-25T14:50:45.878929Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:50:45.879069 waagent[1651]: 2024-06-25T14:50:45.879028Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:50:45.879374 waagent[1651]: 2024-06-25T14:50:45.879315Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 14:50:45.879951 waagent[1651]: 2024-06-25T14:50:45.879887Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 14:50:45.879951 waagent[1651]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 14:50:45.879951 waagent[1651]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 14:50:45.879951 waagent[1651]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 14:50:45.879951 waagent[1651]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:50:45.879951 waagent[1651]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:50:45.879951 waagent[1651]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:50:45.880579 waagent[1651]: 2024-06-25T14:50:45.880507Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 14:50:45.880678 waagent[1651]: 2024-06-25T14:50:45.880635Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:50:45.880864 waagent[1651]: 2024-06-25T14:50:45.880815Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:50:45.881424 waagent[1651]: 2024-06-25T14:50:45.881357Z INFO EnvHandler ExtHandler Configure routes Jun 25 14:50:45.881526 waagent[1651]: 2024-06-25T14:50:45.881489Z INFO EnvHandler ExtHandler Gateway:None Jun 25 14:50:45.881591 waagent[1651]: 2024-06-25T14:50:45.881553Z INFO EnvHandler ExtHandler Routes:None Jun 25 14:50:45.882019 waagent[1651]: 2024-06-25T14:50:45.881971Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 14:50:45.882140 waagent[1651]: 2024-06-25T14:50:45.882082Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 14:50:45.882644 waagent[1651]: 2024-06-25T14:50:45.882577Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 14:50:45.882774 waagent[1651]: 2024-06-25T14:50:45.882724Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 25 14:50:45.882939 waagent[1651]: 2024-06-25T14:50:45.882892Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 14:50:45.888929 waagent[1651]: 2024-06-25T14:50:45.888870Z INFO ExtHandler ExtHandler Jun 25 14:50:45.889496 waagent[1651]: 2024-06-25T14:50:45.889439Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: adf5a9c0-614b-45f1-b78e-bb5486967df3 correlation f62d2089-a0e0-49f9-833d-7ef9a972dfcd created: 2024-06-25T14:49:30.032806Z] Jun 25 14:50:45.890658 waagent[1651]: 2024-06-25T14:50:45.890609Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
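The Destination and Gateway columns in the /proc/net/route dump logged above are little-endian hex IPv4 values. A minimal decoding sketch (illustrative only, not part of the agent), which recovers the same gateway and subnet reported earlier by systemd-networkd:

    import socket
    import struct

    # /proc/net/route stores IPv4 addresses as little-endian hex strings.
    def decode_route_hex(value):
        return socket.inet_ntoa(struct.pack("<I", int(value, 16)))

    print(decode_route_hex("0114C80A"))  # 10.200.20.1  (default gateway)
    print(decode_route_hex("0014C80A"))  # 10.200.20.0  (local /24 network)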
Jun 25 14:50:45.891626 waagent[1651]: 2024-06-25T14:50:45.891586Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jun 25 14:50:45.934639 waagent[1651]: 2024-06-25T14:50:45.934555Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BA42865E-DB59-4AFE-87DB-60ADB0B9C1A5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 14:50:45.936609 waagent[1651]: 2024-06-25T14:50:45.936532Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 14:50:45.936609 waagent[1651]: Executing ['ip', '-a', '-o', 'link']: Jun 25 14:50:45.936609 waagent[1651]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 14:50:45.936609 waagent[1651]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:d1:8e brd ff:ff:ff:ff:ff:ff Jun 25 14:50:45.936609 waagent[1651]: 3: enP62958s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:d1:8e brd ff:ff:ff:ff:ff:ff\ altname enP62958p0s2 Jun 25 14:50:45.936609 waagent[1651]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 14:50:45.936609 waagent[1651]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 14:50:45.936609 waagent[1651]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 14:50:45.936609 waagent[1651]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 14:50:45.936609 waagent[1651]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jun 25 14:50:45.936609 waagent[1651]: 2: eth0 inet6 fe80::222:48ff:fe7d:d18e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 14:50:45.984457 waagent[1651]: 2024-06-25T14:50:45.984374Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 14:50:45.984457 waagent[1651]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:50:45.984457 waagent[1651]: pkts bytes target prot opt in out source destination Jun 25 14:50:45.984457 waagent[1651]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:50:45.984457 waagent[1651]: pkts bytes target prot opt in out source destination Jun 25 14:50:45.984457 waagent[1651]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:50:45.984457 waagent[1651]: pkts bytes target prot opt in out source destination Jun 25 14:50:45.984457 waagent[1651]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 14:50:45.984457 waagent[1651]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 14:50:45.984457 waagent[1651]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 14:50:45.988307 waagent[1651]: 2024-06-25T14:50:45.988211Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 14:50:45.988307 waagent[1651]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:50:45.988307 waagent[1651]: pkts bytes target prot opt in out source destination Jun 25 14:50:45.988307 waagent[1651]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:50:45.988307 waagent[1651]: pkts bytes target prot opt in out source destination Jun 25 14:50:45.988307 waagent[1651]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:50:45.988307 waagent[1651]: pkts bytes target prot opt in out source destination Jun 25 14:50:45.988307 waagent[1651]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 14:50:45.988307 waagent[1651]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 14:50:45.988307 waagent[1651]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 14:50:45.989067 waagent[1651]: 2024-06-25T14:50:45.989031Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 14:50:49.029260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:50:49.029445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:50:49.038675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:50:49.134570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:50:49.465739 kubelet[1851]: E0625 14:50:49.465610 1851 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:50:49.469277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:50:49.469414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:50:59.529321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:50:59.529506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:50:59.537611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:50:59.633839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
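The kubelet crash loop logged above is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file only exists after kubeadm init/join has run, so the unit keeps restarting until then. A minimal sketch that reproduces the check (hypothetical helper using the path from the error message, not kubelet code):

    from pathlib import Path

    # Path taken from the kubelet error above.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} present; kubelet can load its configuration")
    else:
        print(f"{KUBELET_CONFIG} missing; kubelet will keep exiting until it is written")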
Jun 25 14:50:59.946685 kubelet[1861]: E0625 14:50:59.946560 1861 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:50:59.949766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:50:59.949909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:51:10.029360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 14:51:10.029544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:10.037588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:51:10.135983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:10.468032 kubelet[1871]: E0625 14:51:10.467905 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:51:10.471389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:51:10.471543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:51:19.936836 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 25 14:51:20.529371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 14:51:20.529549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:20.537620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:51:20.705422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:20.902667 kubelet[1882]: E0625 14:51:20.902545 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:51:20.905512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:51:20.905648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:51:22.424375 update_engine[1484]: I0625 14:51:22.424330 1484 update_attempter.cc:509] Updating boot flags... Jun 25 14:51:22.646275 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1901) Jun 25 14:51:31.029263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 14:51:31.029447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:31.037657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:51:31.277456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:51:31.393510 kubelet[1932]: E0625 14:51:31.393387 1932 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:51:31.395582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:51:31.395724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:51:36.806407 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:51:36.808101 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:46090.service - OpenSSH per-connection server daemon (10.200.16.10:46090). Jun 25 14:51:37.322871 sshd[1942]: Accepted publickey for core from 10.200.16.10 port 46090 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:37.324498 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:37.329850 systemd-logind[1480]: New session 3 of user core. Jun 25 14:51:37.335449 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:51:37.752351 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:46102.service - OpenSSH per-connection server daemon (10.200.16.10:46102). Jun 25 14:51:38.199178 sshd[1947]: Accepted publickey for core from 10.200.16.10 port 46102 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:38.200960 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:38.204981 systemd-logind[1480]: New session 4 of user core. Jun 25 14:51:38.214453 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:51:38.545765 sshd[1947]: pam_unix(sshd:session): session closed for user core Jun 25 14:51:38.548437 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:46102.service: Deactivated successfully. Jun 25 14:51:38.549209 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:51:38.549799 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:51:38.550845 systemd-logind[1480]: Removed session 4. Jun 25 14:51:38.629943 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:46116.service - OpenSSH per-connection server daemon (10.200.16.10:46116). Jun 25 14:51:39.083574 sshd[1953]: Accepted publickey for core from 10.200.16.10 port 46116 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:39.085677 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:39.089856 systemd-logind[1480]: New session 5 of user core. Jun 25 14:51:39.096491 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:51:39.427960 sshd[1953]: pam_unix(sshd:session): session closed for user core Jun 25 14:51:39.430752 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:46116.service: Deactivated successfully. Jun 25 14:51:39.431504 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:51:39.432078 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:51:39.432981 systemd-logind[1480]: Removed session 5. Jun 25 14:51:39.533959 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:46128.service - OpenSSH per-connection server daemon (10.200.16.10:46128). 
Jun 25 14:51:40.020604 sshd[1959]: Accepted publickey for core from 10.200.16.10 port 46128 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:40.022387 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:40.026312 systemd-logind[1480]: New session 6 of user core. Jun 25 14:51:40.032431 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:51:40.386738 sshd[1959]: pam_unix(sshd:session): session closed for user core Jun 25 14:51:40.389478 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:46128.service: Deactivated successfully. Jun 25 14:51:40.390207 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:51:40.390805 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:51:40.391763 systemd-logind[1480]: Removed session 6. Jun 25 14:51:40.474072 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:46144.service - OpenSSH per-connection server daemon (10.200.16.10:46144). Jun 25 14:51:40.927106 sshd[1965]: Accepted publickey for core from 10.200.16.10 port 46144 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:40.928592 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:40.932895 systemd-logind[1480]: New session 7 of user core. Jun 25 14:51:40.943437 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:51:41.424300 sudo[1968]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:51:41.424895 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:51:41.425825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 14:51:41.425999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:41.433566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:51:41.464566 sudo[1968]: pam_unix(sudo:session): session closed for user root Jun 25 14:51:41.554363 sshd[1965]: pam_unix(sshd:session): session closed for user core Jun 25 14:51:41.557612 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:46144.service: Deactivated successfully. Jun 25 14:51:41.558526 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:51:41.559186 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:51:41.560114 systemd-logind[1480]: Removed session 7. Jun 25 14:51:41.639320 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:46154.service - OpenSSH per-connection server daemon (10.200.16.10:46154). Jun 25 14:51:41.682588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:41.726120 kubelet[1978]: E0625 14:51:41.726064 1978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:51:41.728860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:51:41.728997 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 14:51:42.091332 sshd[1974]: Accepted publickey for core from 10.200.16.10 port 46154 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:42.093073 sshd[1974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:42.098332 systemd-logind[1480]: New session 8 of user core. Jun 25 14:51:42.104450 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 14:51:42.349126 sudo[1986]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:51:42.349917 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:51:42.353687 sudo[1986]: pam_unix(sudo:session): session closed for user root Jun 25 14:51:42.359204 sudo[1985]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:51:42.359505 sudo[1985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:51:42.377624 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:51:42.377000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:51:42.382952 kernel: kauditd_printk_skb: 55 callbacks suppressed Jun 25 14:51:42.383011 kernel: audit: type=1305 audit(1719327102.377:167): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:51:42.383307 auditctl[1989]: No rules Jun 25 14:51:42.394454 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:51:42.394644 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:51:42.377000 audit[1989]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce1f1a70 a2=420 a3=0 items=0 ppid=1 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:42.418318 kernel: audit: type=1300 audit(1719327102.377:167): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce1f1a70 a2=420 a3=0 items=0 ppid=1 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:42.396899 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:51:42.377000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:51:42.426268 kernel: audit: type=1327 audit(1719327102.377:167): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:51:42.426384 kernel: audit: type=1131 audit(1719327102.393:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.451249 augenrules[2006]: No rules Jun 25 14:51:42.452421 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jun 25 14:51:42.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.453450 sudo[1985]: pam_unix(sudo:session): session closed for user root Jun 25 14:51:42.452000 audit[1985]: USER_END pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.488353 kernel: audit: type=1130 audit(1719327102.451:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.488437 kernel: audit: type=1106 audit(1719327102.452:170): pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.488465 kernel: audit: type=1104 audit(1719327102.452:171): pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.452000 audit[1985]: CRED_DISP pid=1985 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.558453 sshd[1974]: pam_unix(sshd:session): session closed for user core Jun 25 14:51:42.558000 audit[1974]: USER_END pid=1974 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:42.562923 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:51:42.563990 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:46154.service: Deactivated successfully. Jun 25 14:51:42.564821 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:51:42.566307 systemd-logind[1480]: Removed session 8. 
Jun 25 14:51:42.558000 audit[1974]: CRED_DISP pid=1974 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:42.602720 kernel: audit: type=1106 audit(1719327102.558:172): pid=1974 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:42.602886 kernel: audit: type=1104 audit(1719327102.558:173): pid=1974 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:42.602925 kernel: audit: type=1131 audit(1719327102.559:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.36:22-10.200.16.10:46154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.36:22-10.200.16.10:46154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:42.648859 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:46158.service - OpenSSH per-connection server daemon (10.200.16.10:46158). Jun 25 14:51:42.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:46158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:43.134000 audit[2012]: USER_ACCT pid=2012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:43.135671 sshd[2012]: Accepted publickey for core from 10.200.16.10 port 46158 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:51:43.135000 audit[2012]: CRED_ACQ pid=2012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:43.135000 audit[2012]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3e810a0 a2=3 a3=1 items=0 ppid=1 pid=2012 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:43.135000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:51:43.137422 sshd[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:51:43.142738 systemd-logind[1480]: New session 9 of user core. Jun 25 14:51:43.147530 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 14:51:43.151000 audit[2012]: USER_START pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:43.152000 audit[2014]: CRED_ACQ pid=2014 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:51:43.411000 audit[2015]: USER_ACCT pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:43.413144 sudo[2015]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:51:43.412000 audit[2015]: CRED_REFR pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:43.413758 sudo[2015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:51:43.414000 audit[2015]: USER_START pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:51:43.772678 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:51:44.496602 dockerd[2025]: time="2024-06-25T14:51:44.496543661Z" level=info msg="Starting up" Jun 25 14:51:44.529556 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3213938221-merged.mount: Deactivated successfully. Jun 25 14:51:44.600162 systemd[1]: var-lib-docker-metacopy\x2dcheck3485299275-merged.mount: Deactivated successfully. Jun 25 14:51:44.624442 dockerd[2025]: time="2024-06-25T14:51:44.624388257Z" level=info msg="Loading containers: start." 
Jun 25 14:51:44.678000 audit[2054]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.678000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc1456a50 a2=0 a3=1 items=0 ppid=2025 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.678000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:51:44.680000 audit[2056]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.680000 audit[2056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc20b3640 a2=0 a3=1 items=0 ppid=2025 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.680000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:51:44.682000 audit[2058]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.682000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdc721380 a2=0 a3=1 items=0 ppid=2025 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.682000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:51:44.685000 audit[2060]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.685000 audit[2060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff53b74b0 a2=0 a3=1 items=0 ppid=2025 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.685000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:51:44.687000 audit[2062]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.687000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe7e887e0 a2=0 a3=1 items=0 ppid=2025 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.687000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:51:44.689000 audit[2064]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.689000 audit[2064]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffca6ba8e0 a2=0 a3=1 items=0 ppid=2025 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.689000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:51:44.708000 audit[2066]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.708000 audit[2066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc83545a0 a2=0 a3=1 items=0 ppid=2025 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.708000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:51:44.710000 audit[2068]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.710000 audit[2068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc7288860 a2=0 a3=1 items=0 ppid=2025 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.710000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:51:44.712000 audit[2070]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.712000 audit[2070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffd8d58c60 a2=0 a3=1 items=0 ppid=2025 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.712000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:51:44.729000 audit[2074]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.729000 audit[2074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcee369d0 a2=0 a3=1 items=0 ppid=2025 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:51:44.730000 audit[2075]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.730000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe7e71650 a2=0 a3=1 items=0 ppid=2025 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:51:44.771258 kernel: Initializing XFRM netlink socket Jun 25 14:51:44.831000 audit[2083]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.831000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc01376b0 a2=0 a3=1 items=0 ppid=2025 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.831000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:51:44.842000 audit[2086]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.842000 audit[2086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe29f4ef0 a2=0 a3=1 items=0 ppid=2025 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:51:44.847000 audit[2090]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.847000 audit[2090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe6b77750 a2=0 a3=1 items=0 ppid=2025 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.847000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 14:51:44.849000 audit[2092]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.849000 audit[2092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcfdb0760 a2=0 a3=1 items=0 ppid=2025 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.849000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:51:44.852000 audit[2094]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.852000 audit[2094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe758aab0 a2=0 a3=1 items=0 ppid=2025 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.852000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:51:44.854000 audit[2096]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.854000 audit[2096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe029ca60 a2=0 a3=1 items=0 ppid=2025 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.854000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:51:44.857000 audit[2098]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.857000 audit[2098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc380ecf0 a2=0 a3=1 items=0 ppid=2025 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.857000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:51:44.860000 audit[2100]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.860000 audit[2100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffedd69290 a2=0 a3=1 items=0 ppid=2025 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.860000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:51:44.862000 audit[2102]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.862000 audit[2102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd693f220 a2=0 a3=1 items=0 ppid=2025 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.862000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:51:44.864000 audit[2104]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.864000 audit[2104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffde54c80 a2=0 a3=1 items=0 ppid=2025 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.864000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:51:44.867000 audit[2106]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.867000 audit[2106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe9055110 a2=0 a3=1 items=0 ppid=2025 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.867000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:51:44.869115 systemd-networkd[1257]: docker0: Link UP Jun 25 14:51:44.883000 audit[2110]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.883000 audit[2110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeb388100 a2=0 a3=1 items=0 ppid=2025 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.883000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:51:44.884000 audit[2111]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:51:44.884000 audit[2111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc0aed8a0 a2=0 a3=1 items=0 ppid=2025 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:51:44.884000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:51:44.886419 dockerd[2025]: time="2024-06-25T14:51:44.886378230Z" level=info msg="Loading containers: done." Jun 25 14:51:45.250154 dockerd[2025]: time="2024-06-25T14:51:45.249391869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:51:45.250325 dockerd[2025]: time="2024-06-25T14:51:45.250218258Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:51:45.250419 dockerd[2025]: time="2024-06-25T14:51:45.250393024Z" level=info msg="Daemon has completed initialization" Jun 25 14:51:45.294350 dockerd[2025]: time="2024-06-25T14:51:45.294288919Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:51:45.297618 systemd[1]: Started docker.service - Docker Application Container Engine. 
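The PROCTITLE fields in the audit records above are the hex-encoded, NUL-separated argv of each iptables call Docker issues while creating its chains. A minimal decoding sketch (illustrative, not part of any tool shown here):

    # auditd encodes the process command line as hex with NUL separators.
    def decode_proctitle(hex_argv):
        return bytes.fromhex(hex_argv).decode().replace("\x00", " ")

    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))  # -> /usr/sbin/iptables --wait -t nat -N DOCKER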
Jun 25 14:51:45.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:45.527471 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3572413890-merged.mount: Deactivated successfully. Jun 25 14:51:46.725014 containerd[1520]: time="2024-06-25T14:51:46.724971215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jun 25 14:51:47.513992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519668507.mount: Deactivated successfully. Jun 25 14:51:49.822155 containerd[1520]: time="2024-06-25T14:51:49.822087399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:49.824377 containerd[1520]: time="2024-06-25T14:51:49.824330587Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256347" Jun 25 14:51:49.827383 containerd[1520]: time="2024-06-25T14:51:49.827340359Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:49.831328 containerd[1520]: time="2024-06-25T14:51:49.831273719Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:49.835605 containerd[1520]: time="2024-06-25T14:51:49.835562610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:49.838286 containerd[1520]: time="2024-06-25T14:51:49.838213091Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 3.112776101s" Jun 25 14:51:49.838477 containerd[1520]: time="2024-06-25T14:51:49.838457179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jun 25 14:51:49.860725 containerd[1520]: time="2024-06-25T14:51:49.860686217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jun 25 14:51:51.779157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 25 14:51:51.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:51.779368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:51.784177 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 14:51:51.784290 kernel: audit: type=1130 audit(1719327111.778:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:51:51.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:51.815501 kernel: audit: type=1131 audit(1719327111.778:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:51.816616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:51:51.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:51.916002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:51:51.933273 kernel: audit: type=1130 audit(1719327111.915:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:51:52.177527 kubelet[2221]: E0625 14:51:52.177375 2221 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:51:52.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:51:52.180345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:51:52.180487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:51:52.198282 kernel: audit: type=1131 audit(1719327112.179:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 14:51:52.610262 containerd[1520]: time="2024-06-25T14:51:52.610189661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:52.613252 containerd[1520]: time="2024-06-25T14:51:52.613177745Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228084" Jun 25 14:51:52.617077 containerd[1520]: time="2024-06-25T14:51:52.617042494Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:52.621986 containerd[1520]: time="2024-06-25T14:51:52.621937071Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:52.627368 containerd[1520]: time="2024-06-25T14:51:52.627324503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:52.629520 containerd[1520]: time="2024-06-25T14:51:52.629389081Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 2.768460297s" Jun 25 14:51:52.629700 containerd[1520]: time="2024-06-25T14:51:52.629679769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jun 25 14:51:52.651938 containerd[1520]: time="2024-06-25T14:51:52.651884155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jun 25 14:51:54.186570 containerd[1520]: time="2024-06-25T14:51:54.186520803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:54.188409 containerd[1520]: time="2024-06-25T14:51:54.188370573Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578348" Jun 25 14:51:54.192127 containerd[1520]: time="2024-06-25T14:51:54.192099832Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:54.196248 containerd[1520]: time="2024-06-25T14:51:54.196197422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:54.200353 containerd[1520]: time="2024-06-25T14:51:54.200306371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:54.201790 containerd[1520]: time="2024-06-25T14:51:54.201741170Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.549801253s" Jun 25 14:51:54.201790 containerd[1520]: time="2024-06-25T14:51:54.201783011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jun 25 14:51:54.226042 containerd[1520]: time="2024-06-25T14:51:54.225988097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jun 25 14:51:55.634882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738146590.mount: Deactivated successfully. Jun 25 14:51:55.969107 containerd[1520]: time="2024-06-25T14:51:55.968987738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:55.971804 containerd[1520]: time="2024-06-25T14:51:55.971758690Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052710" Jun 25 14:51:55.977926 containerd[1520]: time="2024-06-25T14:51:55.977895689Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:55.981809 containerd[1520]: time="2024-06-25T14:51:55.981753750Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:55.984680 containerd[1520]: time="2024-06-25T14:51:55.984641225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:55.986525 containerd[1520]: time="2024-06-25T14:51:55.986473152Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.760433334s" Jun 25 14:51:55.986709 containerd[1520]: time="2024-06-25T14:51:55.986669957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jun 25 14:51:56.018063 containerd[1520]: time="2024-06-25T14:51:56.018026722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 14:51:56.678090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038267705.mount: Deactivated successfully. 
Jun 25 14:51:58.418765 containerd[1520]: time="2024-06-25T14:51:58.418713291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:58.421560 containerd[1520]: time="2024-06-25T14:51:58.421504158Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jun 25 14:51:58.424583 containerd[1520]: time="2024-06-25T14:51:58.424552192Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:58.428458 containerd[1520]: time="2024-06-25T14:51:58.428407324Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:58.434449 containerd[1520]: time="2024-06-25T14:51:58.434405589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:58.435584 containerd[1520]: time="2024-06-25T14:51:58.435534616Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.417237248s" Jun 25 14:51:58.435673 containerd[1520]: time="2024-06-25T14:51:58.435589497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jun 25 14:51:58.457926 containerd[1520]: time="2024-06-25T14:51:58.457876153Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:51:59.122499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692688745.mount: Deactivated successfully. 
Jun 25 14:51:59.144108 containerd[1520]: time="2024-06-25T14:51:59.144060530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:59.146002 containerd[1520]: time="2024-06-25T14:51:59.145957215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 14:51:59.151611 containerd[1520]: time="2024-06-25T14:51:59.151578307Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:59.155583 containerd[1520]: time="2024-06-25T14:51:59.155525319Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:59.161300 containerd[1520]: time="2024-06-25T14:51:59.161217373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:51:59.162366 containerd[1520]: time="2024-06-25T14:51:59.162324399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 704.398404ms" Jun 25 14:51:59.162523 containerd[1520]: time="2024-06-25T14:51:59.162503363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:51:59.185052 containerd[1520]: time="2024-06-25T14:51:59.185003370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 14:51:59.859291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711197171.mount: Deactivated successfully. 
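The containerd "Pulled image ... in <duration>" records above (and the etcd pull that follows) all share one message shape. A small sketch for summarizing them when skimming journal text that looks exactly like these lines; the regular expression is an assumption about only this message format:

```python
import re

# Shape of the containerd records above, e.g.:
#   Pulled image \"registry.k8s.io/pause:3.9\" ... size \"268051\" in 704.398404ms
PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".*?size \\"(\d+)\\" in ([\d.]+m?s)')

def summarize_pulls(journal_text: str) -> None:
    for image, size, duration in PULLED.findall(journal_text):
        print(f"{image}: {int(size):>10} bytes unpacked, pulled in {duration}")
```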
Jun 25 14:52:01.612430 containerd[1520]: time="2024-06-25T14:52:01.612370085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:01.615312 containerd[1520]: time="2024-06-25T14:52:01.615266829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jun 25 14:52:01.619949 containerd[1520]: time="2024-06-25T14:52:01.619908053Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:01.624758 containerd[1520]: time="2024-06-25T14:52:01.624710160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:01.629388 containerd[1520]: time="2024-06-25T14:52:01.629332783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:01.630860 containerd[1520]: time="2024-06-25T14:52:01.630807016Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.445753764s" Jun 25 14:52:01.632384 containerd[1520]: time="2024-06-25T14:52:01.630864977Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 14:52:02.215421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 25 14:52:02.215651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:52:02.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:02.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:02.249052 kernel: audit: type=1130 audit(1719327122.215:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:02.249141 kernel: audit: type=1131 audit(1719327122.215:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:02.251446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:02.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:02.398781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:52:02.417363 kernel: audit: type=1130 audit(1719327122.398:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:02.473614 kubelet[2375]: E0625 14:52:02.472977 2375 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:52:02.476337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:52:02.476522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:52:02.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:52:02.498410 kernel: audit: type=1131 audit(1719327122.476:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:52:07.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:07.887519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:52:07.923536 kernel: audit: type=1130 audit(1719327127.887:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:07.923572 kernel: audit: type=1131 audit(1719327127.887:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:07.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:07.924490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:07.957238 systemd[1]: Reloading. Jun 25 14:52:08.138510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 14:52:08.206000 audit: BPF prog-id=71 op=LOAD Jun 25 14:52:08.223195 kernel: audit: type=1334 audit(1719327128.206:219): prog-id=71 op=LOAD Jun 25 14:52:08.223325 kernel: audit: type=1334 audit(1719327128.206:220): prog-id=57 op=UNLOAD Jun 25 14:52:08.223351 kernel: audit: type=1334 audit(1719327128.211:221): prog-id=72 op=LOAD Jun 25 14:52:08.206000 audit: BPF prog-id=57 op=UNLOAD Jun 25 14:52:08.211000 audit: BPF prog-id=72 op=LOAD Jun 25 14:52:08.211000 audit: BPF prog-id=58 op=UNLOAD Jun 25 14:52:08.229040 kernel: audit: type=1334 audit(1719327128.211:222): prog-id=58 op=UNLOAD Jun 25 14:52:08.217000 audit: BPF prog-id=73 op=LOAD Jun 25 14:52:08.234842 kernel: audit: type=1334 audit(1719327128.217:223): prog-id=73 op=LOAD Jun 25 14:52:08.217000 audit: BPF prog-id=59 op=UNLOAD Jun 25 14:52:08.240561 kernel: audit: type=1334 audit(1719327128.217:224): prog-id=59 op=UNLOAD Jun 25 14:52:08.217000 audit: BPF prog-id=74 op=LOAD Jun 25 14:52:08.246890 kernel: audit: type=1334 audit(1719327128.217:225): prog-id=74 op=LOAD Jun 25 14:52:08.222000 audit: BPF prog-id=75 op=LOAD Jun 25 14:52:08.252420 kernel: audit: type=1334 audit(1719327128.222:226): prog-id=75 op=LOAD Jun 25 14:52:08.222000 audit: BPF prog-id=60 op=UNLOAD Jun 25 14:52:08.222000 audit: BPF prog-id=61 op=UNLOAD Jun 25 14:52:08.228000 audit: BPF prog-id=76 op=LOAD Jun 25 14:52:08.228000 audit: BPF prog-id=62 op=UNLOAD Jun 25 14:52:08.229000 audit: BPF prog-id=77 op=LOAD Jun 25 14:52:08.229000 audit: BPF prog-id=78 op=LOAD Jun 25 14:52:08.229000 audit: BPF prog-id=63 op=UNLOAD Jun 25 14:52:08.229000 audit: BPF prog-id=64 op=UNLOAD Jun 25 14:52:08.234000 audit: BPF prog-id=79 op=LOAD Jun 25 14:52:08.234000 audit: BPF prog-id=65 op=UNLOAD Jun 25 14:52:08.235000 audit: BPF prog-id=80 op=LOAD Jun 25 14:52:08.240000 audit: BPF prog-id=81 op=LOAD Jun 25 14:52:08.240000 audit: BPF prog-id=66 op=UNLOAD Jun 25 14:52:08.240000 audit: BPF prog-id=67 op=UNLOAD Jun 25 14:52:08.240000 audit: BPF prog-id=82 op=LOAD Jun 25 14:52:08.246000 audit: BPF prog-id=83 op=LOAD Jun 25 14:52:08.246000 audit: BPF prog-id=68 op=UNLOAD Jun 25 14:52:08.246000 audit: BPF prog-id=69 op=UNLOAD Jun 25 14:52:08.252000 audit: BPF prog-id=84 op=LOAD Jun 25 14:52:08.252000 audit: BPF prog-id=70 op=UNLOAD Jun 25 14:52:08.281528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:52:08.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:08.289547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:08.290292 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:52:08.290634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:52:08.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:08.297031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:08.391398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:52:08.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:52:08.663016 kubelet[2518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:52:08.663418 kubelet[2518]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:52:08.663468 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:52:08.663646 kubelet[2518]: I0625 14:52:08.663608 2518 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:52:09.410308 kubelet[2518]: I0625 14:52:09.410274 2518 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 14:52:09.410526 kubelet[2518]: I0625 14:52:09.410513 2518 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:52:09.410809 kubelet[2518]: I0625 14:52:09.410792 2518 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 14:52:09.904513 kubelet[2518]: I0625 14:52:09.904482 2518 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:52:09.906391 kubelet[2518]: E0625 14:52:09.906361 2518 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:09.916181 kubelet[2518]: I0625 14:52:09.916147 2518 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:52:09.916450 kubelet[2518]: I0625 14:52:09.916429 2518 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:52:09.916641 kubelet[2518]: I0625 14:52:09.916621 2518 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:52:09.916739 kubelet[2518]: I0625 14:52:09.916643 2518 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:52:09.916739 kubelet[2518]: I0625 14:52:09.916652 2518 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:52:09.918479 kubelet[2518]: I0625 14:52:09.918446 2518 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:52:09.920693 kubelet[2518]: I0625 14:52:09.920667 2518 kubelet.go:396] "Attempting to sync node with API server" Jun 25 14:52:09.920763 kubelet[2518]: I0625 14:52:09.920701 2518 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:52:09.920763 kubelet[2518]: I0625 14:52:09.920728 2518 kubelet.go:312] "Adding apiserver pod source" Jun 25 14:52:09.920763 kubelet[2518]: I0625 14:52:09.920741 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:52:09.922733 kubelet[2518]: W0625 14:52:09.922518 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:09.922733 kubelet[2518]: E0625 14:52:09.922631 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:09.923013 kubelet[2518]: W0625 14:52:09.922970 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-39232a46a6&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 
14:52:09.923067 kubelet[2518]: E0625 14:52:09.923016 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-39232a46a6&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:09.923114 kubelet[2518]: I0625 14:52:09.923089 2518 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:52:09.923424 kubelet[2518]: I0625 14:52:09.923402 2518 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 14:52:09.923478 kubelet[2518]: W0625 14:52:09.923457 2518 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 14:52:09.924200 kubelet[2518]: I0625 14:52:09.924161 2518 server.go:1256] "Started kubelet" Jun 25 14:52:09.929078 kubelet[2518]: I0625 14:52:09.929036 2518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:52:09.929902 kubelet[2518]: E0625 14:52:09.929884 2518 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.4-a-39232a46a6.17dc46ea4fd1e757 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.4-a-39232a46a6,UID:ci-3815.2.4-a-39232a46a6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.4-a-39232a46a6,},FirstTimestamp:2024-06-25 14:52:09.924134743 +0000 UTC m=+1.527359353,LastTimestamp:2024-06-25 14:52:09.924134743 +0000 UTC m=+1.527359353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.4-a-39232a46a6,}" Jun 25 14:52:09.931915 kubelet[2518]: I0625 14:52:09.931892 2518 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:52:09.932000 audit[2528]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:09.932000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe55fd9e0 a2=0 a3=1 items=0 ppid=2518 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:09.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:52:09.933122 kubelet[2518]: I0625 14:52:09.933098 2518 server.go:461] "Adding debug handlers to kubelet server" Jun 25 14:52:09.933000 audit[2529]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:09.933000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca1677c0 a2=0 a3=1 items=0 ppid=2518 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:09.933000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:52:09.934385 kubelet[2518]: I0625 14:52:09.934365 2518 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 14:52:09.934673 kubelet[2518]: I0625 14:52:09.934641 2518 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:52:09.934741 kubelet[2518]: I0625 14:52:09.934655 2518 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:52:09.935560 kubelet[2518]: E0625 14:52:09.935542 2518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-39232a46a6?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Jun 25 14:52:09.936215 kubelet[2518]: E0625 14:52:09.936198 2518 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:52:09.936772 kubelet[2518]: I0625 14:52:09.936756 2518 factory.go:221] Registration of the systemd container factory successfully Jun 25 14:52:09.936000 audit[2531]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:09.936000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd9b84f10 a2=0 a3=1 items=0 ppid=2518 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:09.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:52:09.937114 kubelet[2518]: I0625 14:52:09.937096 2518 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 14:52:09.938633 kubelet[2518]: I0625 14:52:09.938600 2518 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:52:09.938723 kubelet[2518]: I0625 14:52:09.938678 2518 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:52:09.938778 kubelet[2518]: I0625 14:52:09.938613 2518 factory.go:221] Registration of the containerd container factory successfully Jun 25 14:52:09.939000 audit[2533]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:09.939000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffce102150 a2=0 a3=1 items=0 ppid=2518 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:09.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:52:09.950280 kubelet[2518]: W0625 14:52:09.950200 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:09.950280 kubelet[2518]: E0625 14:52:09.950277 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:10.021377 kubelet[2518]: I0625 14:52:10.021349 2518 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:52:10.021564 kubelet[2518]: I0625 14:52:10.021553 2518 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:52:10.021647 kubelet[2518]: I0625 14:52:10.021638 2518 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:52:10.037862 kubelet[2518]: I0625 14:52:10.037833 2518 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.038811 kubelet[2518]: E0625 14:52:10.038794 2518 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.136158 kubelet[2518]: E0625 14:52:10.136122 2518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-39232a46a6?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Jun 25 14:52:10.202343 kubelet[2518]: I0625 14:52:10.202194 2518 policy_none.go:49] "None policy: Start" Jun 25 14:52:10.203281 kubelet[2518]: I0625 14:52:10.203263 2518 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 14:52:10.203410 kubelet[2518]: I0625 14:52:10.203399 2518 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:52:10.210000 audit[2540]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:10.210000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc88f9e90 a2=0 a3=1 items=0 ppid=2518 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:52:10.211350 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 14:52:10.212101 kubelet[2518]: I0625 14:52:10.212069 2518 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 14:52:10.211000 audit[2542]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:10.211000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc8c559a0 a2=0 a3=1 items=0 ppid=2518 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:52:10.213000 audit[2543]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:10.213000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5e0c3b0 a2=0 a3=1 items=0 ppid=2518 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:52:10.214815 kubelet[2518]: I0625 14:52:10.214791 2518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 14:52:10.215539 kubelet[2518]: I0625 14:52:10.215518 2518 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:52:10.215654 kubelet[2518]: I0625 14:52:10.215642 2518 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 14:52:10.214000 audit[2544]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:10.214000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda27a810 a2=0 a3=1 items=0 ppid=2518 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:52:10.216113 kubelet[2518]: E0625 14:52:10.216093 2518 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:52:10.217000 audit[2545]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:10.217000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffded8d300 a2=0 a3=1 items=0 ppid=2518 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.217000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:52:10.218480 kubelet[2518]: W0625 14:52:10.218434 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:10.218617 kubelet[2518]: E0625 14:52:10.218604 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:10.222481 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 14:52:10.221000 audit[2546]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:10.221000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4432150 a2=0 a3=1 items=0 ppid=2518 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:52:10.223000 audit[2547]: NETFILTER_CFG table=nat:39 family=10 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:10.223000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffcbb9e310 a2=0 a3=1 items=0 ppid=2518 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:52:10.224000 audit[2548]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:10.224000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd283a560 a2=0 a3=1 items=0 ppid=2518 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:10.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:52:10.232821 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 25 14:52:10.234966 kubelet[2518]: I0625 14:52:10.234940 2518 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:52:10.236428 kubelet[2518]: I0625 14:52:10.236007 2518 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:52:10.237782 kubelet[2518]: E0625 14:52:10.237753 2518 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-a-39232a46a6\" not found" Jun 25 14:52:10.241338 kubelet[2518]: I0625 14:52:10.241313 2518 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.241922 kubelet[2518]: E0625 14:52:10.241905 2518 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.316443 kubelet[2518]: I0625 14:52:10.316406 2518 topology_manager.go:215] "Topology Admit Handler" podUID="0ed2591aeadcd7f1b0d2fd3658588dc8" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.317978 kubelet[2518]: I0625 14:52:10.317947 2518 topology_manager.go:215] "Topology Admit Handler" podUID="0beab0512939231dbe35da11b8acbbcb" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.319483 kubelet[2518]: I0625 14:52:10.319462 2518 topology_manager.go:215] "Topology Admit Handler" podUID="92af5f882bd1801e4f4ced21815b7dfe" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.325417 systemd[1]: Created slice kubepods-burstable-pod0ed2591aeadcd7f1b0d2fd3658588dc8.slice - libcontainer container kubepods-burstable-pod0ed2591aeadcd7f1b0d2fd3658588dc8.slice. Jun 25 14:52:10.336513 systemd[1]: Created slice kubepods-burstable-pod0beab0512939231dbe35da11b8acbbcb.slice - libcontainer container kubepods-burstable-pod0beab0512939231dbe35da11b8acbbcb.slice. 
Jun 25 14:52:10.341073 kubelet[2518]: I0625 14:52:10.341040 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341324 kubelet[2518]: I0625 14:52:10.341301 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341434 kubelet[2518]: I0625 14:52:10.341423 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341539 kubelet[2518]: I0625 14:52:10.341527 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341633 kubelet[2518]: I0625 14:52:10.341622 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92af5f882bd1801e4f4ced21815b7dfe-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-39232a46a6\" (UID: \"92af5f882bd1801e4f4ced21815b7dfe\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341722 kubelet[2518]: I0625 14:52:10.341712 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ed2591aeadcd7f1b0d2fd3658588dc8-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" (UID: \"0ed2591aeadcd7f1b0d2fd3658588dc8\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341802 kubelet[2518]: I0625 14:52:10.341793 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ed2591aeadcd7f1b0d2fd3658588dc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" (UID: \"0ed2591aeadcd7f1b0d2fd3658588dc8\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341879 kubelet[2518]: I0625 14:52:10.341870 2518 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.341975 kubelet[2518]: I0625 14:52:10.341965 2518 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ed2591aeadcd7f1b0d2fd3658588dc8-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" (UID: \"0ed2591aeadcd7f1b0d2fd3658588dc8\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.346683 systemd[1]: Created slice kubepods-burstable-pod92af5f882bd1801e4f4ced21815b7dfe.slice - libcontainer container kubepods-burstable-pod92af5f882bd1801e4f4ced21815b7dfe.slice. Jun 25 14:52:10.537373 kubelet[2518]: E0625 14:52:10.537349 2518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-39232a46a6?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Jun 25 14:52:10.635860 containerd[1520]: time="2024-06-25T14:52:10.635810801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-39232a46a6,Uid:0ed2591aeadcd7f1b0d2fd3658588dc8,Namespace:kube-system,Attempt:0,}" Jun 25 14:52:10.639998 containerd[1520]: time="2024-06-25T14:52:10.639928274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-39232a46a6,Uid:0beab0512939231dbe35da11b8acbbcb,Namespace:kube-system,Attempt:0,}" Jun 25 14:52:10.644457 kubelet[2518]: I0625 14:52:10.644424 2518 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.644762 kubelet[2518]: E0625 14:52:10.644744 2518 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:10.650580 containerd[1520]: time="2024-06-25T14:52:10.650533463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-39232a46a6,Uid:92af5f882bd1801e4f4ced21815b7dfe,Namespace:kube-system,Attempt:0,}" Jun 25 14:52:11.035527 kubelet[2518]: W0625 14:52:11.035444 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.035527 kubelet[2518]: E0625 14:52:11.035508 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.226259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308673155.mount: Deactivated successfully. 
Jun 25 14:52:11.258315 containerd[1520]: time="2024-06-25T14:52:11.258262126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.261703 containerd[1520]: time="2024-06-25T14:52:11.261658505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 14:52:11.267292 containerd[1520]: time="2024-06-25T14:52:11.267213042Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.268559 containerd[1520]: time="2024-06-25T14:52:11.268509064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:52:11.271509 containerd[1520]: time="2024-06-25T14:52:11.271467356Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.275953 containerd[1520]: time="2024-06-25T14:52:11.275904233Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.282398 containerd[1520]: time="2024-06-25T14:52:11.282357426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:52:11.286057 containerd[1520]: time="2024-06-25T14:52:11.285495681Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.289631 containerd[1520]: time="2024-06-25T14:52:11.289588312Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.293609 containerd[1520]: time="2024-06-25T14:52:11.293562701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.297632 containerd[1520]: time="2024-06-25T14:52:11.297590651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.299022 containerd[1520]: time="2024-06-25T14:52:11.298981956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 663.050313ms" Jun 25 14:52:11.300970 containerd[1520]: time="2024-06-25T14:52:11.300935910Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 
25 14:52:11.308339 containerd[1520]: time="2024-06-25T14:52:11.308297558Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.314512 containerd[1520]: time="2024-06-25T14:52:11.314458106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.315489 containerd[1520]: time="2024-06-25T14:52:11.315448163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 664.811218ms" Jun 25 14:52:11.316619 containerd[1520]: time="2024-06-25T14:52:11.316586663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:52:11.317626 containerd[1520]: time="2024-06-25T14:52:11.317593240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 677.567484ms" Jun 25 14:52:11.338730 kubelet[2518]: E0625 14:52:11.338696 2518 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-39232a46a6?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Jun 25 14:52:11.387409 kubelet[2518]: W0625 14:52:11.387326 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-39232a46a6&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.387409 kubelet[2518]: E0625 14:52:11.387387 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-39232a46a6&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.413939 kubelet[2518]: W0625 14:52:11.413871 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.413939 kubelet[2518]: E0625 14:52:11.413911 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.446926 kubelet[2518]: I0625 14:52:11.446652 2518 
kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:11.447068 kubelet[2518]: E0625 14:52:11.446967 2518 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:11.449442 kubelet[2518]: W0625 14:52:11.449368 2518 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.449442 kubelet[2518]: E0625 14:52:11.449424 2518 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.935613 kubelet[2518]: E0625 14:52:11.935569 2518 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused Jun 25 14:52:11.952125 containerd[1520]: time="2024-06-25T14:52:11.951999300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:11.952519 containerd[1520]: time="2024-06-25T14:52:11.952147422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:11.952519 containerd[1520]: time="2024-06-25T14:52:11.952187983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:11.952519 containerd[1520]: time="2024-06-25T14:52:11.952285065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:11.954316 containerd[1520]: time="2024-06-25T14:52:11.954187698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:11.954316 containerd[1520]: time="2024-06-25T14:52:11.954274539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:11.954316 containerd[1520]: time="2024-06-25T14:52:11.954291019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:11.956687 containerd[1520]: time="2024-06-25T14:52:11.954510463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:11.957105 containerd[1520]: time="2024-06-25T14:52:11.956877905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:11.957105 containerd[1520]: time="2024-06-25T14:52:11.956925545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:11.957105 containerd[1520]: time="2024-06-25T14:52:11.956942026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:11.957105 containerd[1520]: time="2024-06-25T14:52:11.956952466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:11.995547 systemd[1]: Started cri-containerd-07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0.scope - libcontainer container 07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0. Jun 25 14:52:11.999369 systemd[1]: Started cri-containerd-1332a1e0706f9bc803732328f48b13f91efba8d118ef2a4799c4ed25e8fd9e0f.scope - libcontainer container 1332a1e0706f9bc803732328f48b13f91efba8d118ef2a4799c4ed25e8fd9e0f. Jun 25 14:52:12.000359 systemd[1]: Started cri-containerd-1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8.scope - libcontainer container 1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8. Jun 25 14:52:12.012000 audit: BPF prog-id=85 op=LOAD Jun 25 14:52:12.013000 audit: BPF prog-id=86 op=LOAD Jun 25 14:52:12.013000 audit[2603]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2576 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037656131643434313737343639666637343533363966386238346330 Jun 25 14:52:12.013000 audit: BPF prog-id=87 op=LOAD Jun 25 14:52:12.013000 audit[2603]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2576 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037656131643434313737343639666637343533363966386238346330 Jun 25 14:52:12.013000 audit: BPF prog-id=87 op=UNLOAD Jun 25 14:52:12.013000 audit: BPF prog-id=86 op=UNLOAD Jun 25 14:52:12.013000 audit: BPF prog-id=88 op=LOAD Jun 25 14:52:12.013000 audit[2603]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2576 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037656131643434313737343639666637343533363966386238346330 Jun 25 14:52:12.015000 audit: BPF prog-id=89 op=LOAD Jun 25 14:52:12.016000 audit: BPF prog-id=90 op=LOAD Jun 25 14:52:12.016000 audit[2618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 
a1=400018d8b0 a2=78 a3=0 items=0 ppid=2583 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386561376661636233373537323936356336646666393466636635 Jun 25 14:52:12.016000 audit: BPF prog-id=91 op=LOAD Jun 25 14:52:12.016000 audit[2618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2583 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386561376661636233373537323936356336646666393466636635 Jun 25 14:52:12.016000 audit: BPF prog-id=91 op=UNLOAD Jun 25 14:52:12.016000 audit: BPF prog-id=90 op=UNLOAD Jun 25 14:52:12.016000 audit: BPF prog-id=92 op=LOAD Jun 25 14:52:12.016000 audit[2618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2583 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161386561376661636233373537323936356336646666393466636635 Jun 25 14:52:12.020000 audit: BPF prog-id=93 op=LOAD Jun 25 14:52:12.021000 audit: BPF prog-id=94 op=LOAD Jun 25 14:52:12.021000 audit[2612]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2577 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333261316530373036663962633830333733323332386634386231 Jun 25 14:52:12.021000 audit: BPF prog-id=95 op=LOAD Jun 25 14:52:12.021000 audit[2612]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2577 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333261316530373036663962633830333733323332386634386231 Jun 25 14:52:12.021000 audit: BPF prog-id=95 op=UNLOAD Jun 25 14:52:12.021000 audit: BPF prog-id=94 op=UNLOAD Jun 25 
14:52:12.022000 audit: BPF prog-id=96 op=LOAD Jun 25 14:52:12.022000 audit[2612]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2577 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133333261316530373036663962633830333733323332386634386231 Jun 25 14:52:12.051256 containerd[1520]: time="2024-06-25T14:52:12.051160128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-39232a46a6,Uid:0beab0512939231dbe35da11b8acbbcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8\"" Jun 25 14:52:12.056875 containerd[1520]: time="2024-06-25T14:52:12.056780824Z" level=info msg="CreateContainer within sandbox \"1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:52:12.059264 containerd[1520]: time="2024-06-25T14:52:12.059183505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-39232a46a6,Uid:0ed2591aeadcd7f1b0d2fd3658588dc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1332a1e0706f9bc803732328f48b13f91efba8d118ef2a4799c4ed25e8fd9e0f\"" Jun 25 14:52:12.061779 containerd[1520]: time="2024-06-25T14:52:12.061732988Z" level=info msg="CreateContainer within sandbox \"1332a1e0706f9bc803732328f48b13f91efba8d118ef2a4799c4ed25e8fd9e0f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:52:12.064451 containerd[1520]: time="2024-06-25T14:52:12.064401793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-39232a46a6,Uid:92af5f882bd1801e4f4ced21815b7dfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0\"" Jun 25 14:52:12.067831 containerd[1520]: time="2024-06-25T14:52:12.067793931Z" level=info msg="CreateContainer within sandbox \"07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:52:12.120766 containerd[1520]: time="2024-06-25T14:52:12.120699672Z" level=info msg="CreateContainer within sandbox \"1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d\"" Jun 25 14:52:12.122340 containerd[1520]: time="2024-06-25T14:52:12.122293979Z" level=info msg="StartContainer for \"21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d\"" Jun 25 14:52:12.128714 containerd[1520]: time="2024-06-25T14:52:12.128665728Z" level=info msg="CreateContainer within sandbox \"1332a1e0706f9bc803732328f48b13f91efba8d118ef2a4799c4ed25e8fd9e0f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"efb90fb7662b11b380ff17dd6676b157d81b5525ca5e2e8ebda9c5625399a629\"" Jun 25 14:52:12.129381 containerd[1520]: time="2024-06-25T14:52:12.129346379Z" level=info msg="StartContainer for \"efb90fb7662b11b380ff17dd6676b157d81b5525ca5e2e8ebda9c5625399a629\"" Jun 
25 14:52:12.137548 containerd[1520]: time="2024-06-25T14:52:12.137499038Z" level=info msg="CreateContainer within sandbox \"07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8\"" Jun 25 14:52:12.138285 containerd[1520]: time="2024-06-25T14:52:12.138223730Z" level=info msg="StartContainer for \"518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8\"" Jun 25 14:52:12.152451 systemd[1]: Started cri-containerd-21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d.scope - libcontainer container 21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d. Jun 25 14:52:12.168429 systemd[1]: Started cri-containerd-efb90fb7662b11b380ff17dd6676b157d81b5525ca5e2e8ebda9c5625399a629.scope - libcontainer container efb90fb7662b11b380ff17dd6676b157d81b5525ca5e2e8ebda9c5625399a629. Jun 25 14:52:12.169000 audit: BPF prog-id=97 op=LOAD Jun 25 14:52:12.170000 audit: BPF prog-id=98 op=LOAD Jun 25 14:52:12.170000 audit[2692]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2583 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663431633333643263623064663437366636306236316135336432 Jun 25 14:52:12.170000 audit: BPF prog-id=99 op=LOAD Jun 25 14:52:12.170000 audit[2692]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=19 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2583 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663431633333643263623064663437366636306236316135336432 Jun 25 14:52:12.170000 audit: BPF prog-id=99 op=UNLOAD Jun 25 14:52:12.170000 audit: BPF prog-id=98 op=UNLOAD Jun 25 14:52:12.170000 audit: BPF prog-id=100 op=LOAD Jun 25 14:52:12.170000 audit[2692]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2583 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663431633333643263623064663437366636306236316135336432 Jun 25 14:52:12.189000 audit: BPF prog-id=101 op=LOAD Jun 25 14:52:12.190000 audit: BPF prog-id=102 op=LOAD Jun 25 14:52:12.190000 audit[2705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2577 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566623930666237363632623131623338306666313764643636373662 Jun 25 14:52:12.191000 audit: BPF prog-id=103 op=LOAD Jun 25 14:52:12.191000 audit[2705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2577 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566623930666237363632623131623338306666313764643636373662 Jun 25 14:52:12.191000 audit: BPF prog-id=103 op=UNLOAD Jun 25 14:52:12.193000 audit: BPF prog-id=102 op=UNLOAD Jun 25 14:52:12.195000 audit: BPF prog-id=104 op=LOAD Jun 25 14:52:12.195000 audit[2705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2577 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566623930666237363632623131623338306666313764643636373662 Jun 25 14:52:12.205474 systemd[1]: Started cri-containerd-518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8.scope - libcontainer container 518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8. 
Jun 25 14:52:12.232325 containerd[1520]: time="2024-06-25T14:52:12.232272172Z" level=info msg="StartContainer for \"21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d\" returns successfully" Jun 25 14:52:12.236000 audit: BPF prog-id=105 op=LOAD Jun 25 14:52:12.237000 audit: BPF prog-id=106 op=LOAD Jun 25 14:52:12.237000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2576 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383935306336633661633261613639663564343863373637353235 Jun 25 14:52:12.237000 audit: BPF prog-id=107 op=LOAD Jun 25 14:52:12.237000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2576 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383935306336633661633261613639663564343863373637353235 Jun 25 14:52:12.237000 audit: BPF prog-id=107 op=UNLOAD Jun 25 14:52:12.237000 audit: BPF prog-id=106 op=UNLOAD Jun 25 14:52:12.237000 audit: BPF prog-id=108 op=LOAD Jun 25 14:52:12.237000 audit[2733]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=2576 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:12.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383935306336633661633261613639663564343863373637353235 Jun 25 14:52:12.252494 containerd[1520]: time="2024-06-25T14:52:12.252446715Z" level=info msg="StartContainer for \"efb90fb7662b11b380ff17dd6676b157d81b5525ca5e2e8ebda9c5625399a629\" returns successfully" Jun 25 14:52:12.281544 containerd[1520]: time="2024-06-25T14:52:12.281486970Z" level=info msg="StartContainer for \"518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8\" returns successfully" Jun 25 14:52:13.048693 kubelet[2518]: I0625 14:52:13.048658 2518 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:14.404000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.410156 kernel: kauditd_printk_skb: 131 callbacks suppressed Jun 25 14:52:14.410300 kernel: audit: type=1400 audit(1719327134.404:298): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 
scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.404000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.467522 kernel: audit: type=1400 audit(1719327134.404:299): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.404000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=40072449f0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.498804 kernel: audit: type=1300 audit(1719327134.404:299): arch=c00000b7 syscall=27 success=no exit=-13 a0=40 a1=40072449f0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.404000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.526631 kernel: audit: type=1327 audit(1719327134.404:299): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.425000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.559065 kernel: audit: type=1400 audit(1719327134.425:300): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.425000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=4c a1=4007244ea0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.585225 kernel: audit: type=1300 audit(1719327134.425:300): arch=c00000b7 syscall=27 success=no exit=-13 a0=4c a1=4007244ea0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.425000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.617303 kernel: audit: type=1327 audit(1719327134.425:300): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.439000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.637798 kernel: audit: type=1400 audit(1719327134.439:301): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.439000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=4e a1=40093f02a0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.664121 kernel: audit: type=1300 audit(1719327134.439:301): arch=c00000b7 syscall=27 success=no exit=-13 a0=4e a1=40093f02a0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.664418 kubelet[2518]: E0625 14:52:14.664391 2518 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.4-a-39232a46a6\" not found" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:14.439000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.675528 kubelet[2518]: I0625 14:52:14.675494 2518 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:14.687593 kernel: audit: type=1327 audit(1719327134.439:301): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.404000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=41 a1=40047d2030 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.404000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.553000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.553000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=40006d8ff0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:14.553000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:14.553000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.553000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=4000baed60 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:14.553000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:14.565000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.565000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=4004110840 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.565000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.565000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:14.565000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=4003fcd260 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:52:14.565000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:52:14.925355 kubelet[2518]: I0625 14:52:14.925195 2518 apiserver.go:52] "Watching apiserver" Jun 25 14:52:14.939628 kubelet[2518]: I0625 14:52:14.939587 2518 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:52:17.265491 systemd[1]: Reloading. Jun 25 14:52:17.482878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:52:17.566000 audit: BPF prog-id=109 op=LOAD Jun 25 14:52:17.566000 audit: BPF prog-id=71 op=UNLOAD Jun 25 14:52:17.567000 audit: BPF prog-id=110 op=LOAD Jun 25 14:52:17.567000 audit: BPF prog-id=72 op=UNLOAD Jun 25 14:52:17.568000 audit: BPF prog-id=111 op=LOAD Jun 25 14:52:17.569000 audit: BPF prog-id=73 op=UNLOAD Jun 25 14:52:17.569000 audit: BPF prog-id=112 op=LOAD Jun 25 14:52:17.569000 audit: BPF prog-id=113 op=LOAD Jun 25 14:52:17.569000 audit: BPF prog-id=74 op=UNLOAD Jun 25 14:52:17.569000 audit: BPF prog-id=75 op=UNLOAD Jun 25 14:52:17.571000 audit: BPF prog-id=114 op=LOAD Jun 25 14:52:17.571000 audit: BPF prog-id=93 op=UNLOAD Jun 25 14:52:17.572000 audit: BPF prog-id=115 op=LOAD Jun 25 14:52:17.572000 audit: BPF prog-id=105 op=UNLOAD Jun 25 14:52:17.573000 audit: BPF prog-id=116 op=LOAD Jun 25 14:52:17.573000 audit: BPF prog-id=76 op=UNLOAD Jun 25 14:52:17.573000 audit: BPF prog-id=117 op=LOAD Jun 25 14:52:17.573000 audit: BPF prog-id=118 op=LOAD Jun 25 14:52:17.573000 audit: BPF prog-id=77 op=UNLOAD Jun 25 14:52:17.574000 audit: BPF prog-id=78 op=UNLOAD Jun 25 14:52:17.574000 audit: BPF prog-id=119 op=LOAD Jun 25 14:52:17.574000 audit: BPF prog-id=79 op=UNLOAD Jun 25 14:52:17.575000 audit: BPF prog-id=120 op=LOAD Jun 25 14:52:17.575000 audit: BPF prog-id=121 op=LOAD Jun 25 14:52:17.575000 audit: BPF prog-id=80 op=UNLOAD Jun 25 14:52:17.575000 audit: BPF prog-id=81 op=UNLOAD Jun 25 14:52:17.575000 audit: BPF prog-id=122 op=LOAD Jun 25 14:52:17.576000 audit: BPF prog-id=123 op=LOAD Jun 25 14:52:17.576000 audit: BPF prog-id=82 op=UNLOAD Jun 25 14:52:17.576000 audit: BPF prog-id=83 op=UNLOAD Jun 25 14:52:17.577000 audit: BPF prog-id=124 op=LOAD Jun 25 14:52:17.577000 audit: BPF prog-id=97 op=UNLOAD Jun 25 14:52:17.578000 audit: BPF prog-id=125 op=LOAD Jun 25 14:52:17.578000 audit: BPF prog-id=89 op=UNLOAD Jun 25 14:52:17.580000 audit: BPF prog-id=126 op=LOAD Jun 25 14:52:17.580000 audit: BPF prog-id=101 op=UNLOAD Jun 25 14:52:17.582000 audit: BPF prog-id=127 op=LOAD Jun 25 14:52:17.582000 audit: BPF prog-id=85 op=UNLOAD Jun 25 14:52:17.583000 audit: BPF prog-id=128 op=LOAD Jun 25 14:52:17.583000 audit: BPF prog-id=84 op=UNLOAD Jun 25 14:52:17.608509 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:17.626736 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:52:17.626999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:52:17.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:17.627071 systemd[1]: kubelet.service: Consumed 1.201s CPU time. Jun 25 14:52:17.639201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:52:17.810740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:52:17.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:17.870314 kubelet[2883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:52:17.870654 kubelet[2883]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:52:17.870699 kubelet[2883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:52:17.870857 kubelet[2883]: I0625 14:52:17.870819 2883 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:52:17.875613 kubelet[2883]: I0625 14:52:17.875574 2883 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 25 14:52:17.875613 kubelet[2883]: I0625 14:52:17.875606 2883 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:52:17.875828 kubelet[2883]: I0625 14:52:17.875809 2883 server.go:919] "Client rotation is on, will bootstrap in background" Jun 25 14:52:17.877791 kubelet[2883]: I0625 14:52:17.877764 2883 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:52:17.880163 kubelet[2883]: I0625 14:52:17.879827 2883 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:52:17.894881 kubelet[2883]: I0625 14:52:17.894851 2883 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:52:17.895075 kubelet[2883]: I0625 14:52:17.895056 2883 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:52:17.895262 kubelet[2883]: I0625 14:52:17.895222 2883 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:52:17.895343 kubelet[2883]: I0625 14:52:17.895267 2883 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:52:17.895343 kubelet[2883]: I0625 14:52:17.895277 2883 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:52:17.895343 kubelet[2883]: I0625 14:52:17.895309 2883 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:52:17.895447 kubelet[2883]: I0625 14:52:17.895417 2883 kubelet.go:396] "Attempting to sync node with API server" Jun 25 14:52:17.895447 kubelet[2883]: I0625 14:52:17.895433 2883 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:52:17.895493 kubelet[2883]: I0625 14:52:17.895455 2883 kubelet.go:312] "Adding apiserver pod source" Jun 25 14:52:17.895493 kubelet[2883]: I0625 14:52:17.895470 2883 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:52:17.903155 kubelet[2883]: I0625 14:52:17.900687 2883 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:52:17.903155 kubelet[2883]: I0625 14:52:17.900879 2883 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 14:52:17.903155 kubelet[2883]: I0625 14:52:17.901274 2883 server.go:1256] "Started kubelet" Jun 25 14:52:17.903372 kubelet[2883]: I0625 14:52:17.903262 2883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:52:17.918592 kubelet[2883]: E0625 14:52:17.918559 2883 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:52:17.919883 kubelet[2883]: I0625 14:52:17.919862 2883 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:52:17.920710 kubelet[2883]: I0625 14:52:17.920671 2883 server.go:461] "Adding debug handlers to kubelet server" Jun 25 14:52:17.921314 kubelet[2883]: I0625 14:52:17.921284 2883 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:52:17.921724 kubelet[2883]: I0625 14:52:17.921700 2883 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:52:17.921883 kubelet[2883]: I0625 14:52:17.921863 2883 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:52:17.923754 kubelet[2883]: I0625 14:52:17.923730 2883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 14:52:17.923920 kubelet[2883]: I0625 14:52:17.923903 2883 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:52:17.925470 kubelet[2883]: I0625 14:52:17.925428 2883 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 14:52:17.926562 kubelet[2883]: I0625 14:52:17.926538 2883 factory.go:221] Registration of the containerd container factory successfully Jun 25 14:52:17.926562 kubelet[2883]: I0625 14:52:17.926555 2883 factory.go:221] Registration of the systemd container factory successfully Jun 25 14:52:17.935244 kubelet[2883]: I0625 14:52:17.935182 2883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:52:17.936583 kubelet[2883]: I0625 14:52:17.936536 2883 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:52:17.936583 kubelet[2883]: I0625 14:52:17.936573 2883 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:52:17.936791 kubelet[2883]: I0625 14:52:17.936598 2883 kubelet.go:2329] "Starting kubelet main sync loop" Jun 25 14:52:17.936791 kubelet[2883]: E0625 14:52:17.936662 2883 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:52:17.991167 kubelet[2883]: I0625 14:52:17.991114 2883 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:52:17.991167 kubelet[2883]: I0625 14:52:17.991142 2883 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:52:17.991167 kubelet[2883]: I0625 14:52:17.991162 2883 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:52:17.991386 kubelet[2883]: I0625 14:52:17.991342 2883 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:52:17.991386 kubelet[2883]: I0625 14:52:17.991365 2883 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:52:17.991386 kubelet[2883]: I0625 14:52:17.991372 2883 policy_none.go:49] "None policy: Start" Jun 25 14:52:17.992062 kubelet[2883]: I0625 14:52:17.992036 2883 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 14:52:17.992100 kubelet[2883]: I0625 14:52:17.992070 2883 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:52:17.992278 kubelet[2883]: I0625 14:52:17.992262 2883 state_mem.go:75] "Updated machine memory state" Jun 25 14:52:18.003896 kubelet[2883]: I0625 14:52:18.003859 2883 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:52:18.009654 kubelet[2883]: I0625 14:52:18.009628 2883 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:52:18.035164 kubelet[2883]: I0625 14:52:18.035135 2883 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.037261 kubelet[2883]: I0625 14:52:18.037215 2883 topology_manager.go:215] "Topology Admit Handler" podUID="0ed2591aeadcd7f1b0d2fd3658588dc8" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.039097 kubelet[2883]: I0625 14:52:18.039074 2883 topology_manager.go:215] "Topology Admit Handler" podUID="0beab0512939231dbe35da11b8acbbcb" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.039357 kubelet[2883]: I0625 14:52:18.039342 2883 topology_manager.go:215] "Topology Admit Handler" podUID="92af5f882bd1801e4f4ced21815b7dfe" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.056454 kubelet[2883]: W0625 14:52:18.056424 2883 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:52:18.057626 kubelet[2883]: W0625 14:52:18.057595 2883 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:52:18.058687 kubelet[2883]: W0625 14:52:18.058662 2883 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:52:18.064082 kubelet[2883]: I0625 14:52:18.064050 2883 kubelet_node_status.go:112] "Node was previously registered" 
node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.064433 kubelet[2883]: I0625 14:52:18.064421 2883 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.181000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="sda9" ino=6772516 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:52:18.181000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000817e80 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:18.181000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:18.223592 kubelet[2883]: I0625 14:52:18.223513 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.223875 kubelet[2883]: I0625 14:52:18.223861 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224006 kubelet[2883]: I0625 14:52:18.223995 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224113 kubelet[2883]: I0625 14:52:18.224104 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ed2591aeadcd7f1b0d2fd3658588dc8-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" (UID: \"0ed2591aeadcd7f1b0d2fd3658588dc8\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224222 kubelet[2883]: I0625 14:52:18.224212 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ed2591aeadcd7f1b0d2fd3658588dc8-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" (UID: \"0ed2591aeadcd7f1b0d2fd3658588dc8\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224365 kubelet[2883]: I0625 14:52:18.224353 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0ed2591aeadcd7f1b0d2fd3658588dc8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" (UID: \"0ed2591aeadcd7f1b0d2fd3658588dc8\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224524 kubelet[2883]: I0625 14:52:18.224512 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224639 kubelet[2883]: I0625 14:52:18.224628 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0beab0512939231dbe35da11b8acbbcb-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-39232a46a6\" (UID: \"0beab0512939231dbe35da11b8acbbcb\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.224756 kubelet[2883]: I0625 14:52:18.224747 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92af5f882bd1801e4f4ced21815b7dfe-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-39232a46a6\" (UID: \"92af5f882bd1801e4f4ced21815b7dfe\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:18.896055 kubelet[2883]: I0625 14:52:18.895993 2883 apiserver.go:52] "Watching apiserver" Jun 25 14:52:18.922828 kubelet[2883]: I0625 14:52:18.922753 2883 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:52:19.000951 kubelet[2883]: W0625 14:52:19.000909 2883 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:52:19.001132 kubelet[2883]: E0625 14:52:19.001013 2883 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3815.2.4-a-39232a46a6\" already exists" pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" Jun 25 14:52:19.060799 kubelet[2883]: I0625 14:52:19.060757 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-a-39232a46a6" podStartSLOduration=1.060694673 podStartE2EDuration="1.060694673s" podCreationTimestamp="2024-06-25 14:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:52:19.011438917 +0000 UTC m=+1.194087152" watchObservedRunningTime="2024-06-25 14:52:19.060694673 +0000 UTC m=+1.243342908" Jun 25 14:52:19.083509 kubelet[2883]: I0625 14:52:19.083449 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-a-39232a46a6" podStartSLOduration=1.083401283 podStartE2EDuration="1.083401283s" podCreationTimestamp="2024-06-25 14:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:52:19.061135879 +0000 UTC m=+1.243784114" watchObservedRunningTime="2024-06-25 14:52:19.083401283 +0000 UTC m=+1.266049518" Jun 25 14:52:19.834000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 
scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.840153 kernel: kauditd_printk_skb: 59 callbacks suppressed Jun 25 14:52:19.840322 kernel: audit: type=1400 audit(1719327139.834:349): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.834000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40010f3f80 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:19.904846 kernel: audit: type=1300 audit(1719327139.834:349): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40010f3f80 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:19.834000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:19.928693 kernel: audit: type=1327 audit(1719327139.834:349): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:19.865000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.949423 kernel: audit: type=1400 audit(1719327139.865:350): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.865000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=400117c140 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:19.982034 kernel: audit: type=1300 audit(1719327139.865:350): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=400117c140 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:19.865000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:20.009373 kernel: audit: type=1327 audit(1719327139.865:350): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:19.868000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:20.033084 kernel: audit: type=1400 audit(1719327139.868:351): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.868000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400117c300 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:20.060008 kernel: audit: type=1300 audit(1719327139.868:351): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400117c300 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:52:19.868000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:20.089939 kernel: audit: type=1327 audit(1719327139.868:351): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:20.092259 kernel: audit: type=1400 audit(1719327139.870:352): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.870000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:52:19.870000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400117c4c0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 
key=(null) Jun 25 14:52:19.870000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:52:22.847110 sudo[2015]: pam_unix(sudo:session): session closed for user root Jun 25 14:52:22.845000 audit[2015]: USER_END pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:52:22.846000 audit[2015]: CRED_DISP pid=2015 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:52:22.931427 sshd[2012]: pam_unix(sshd:session): session closed for user core Jun 25 14:52:22.931000 audit[2012]: USER_END pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:52:22.931000 audit[2012]: CRED_DISP pid=2012 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:52:22.935538 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:52:22.936210 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:46158.service: Deactivated successfully. Jun 25 14:52:22.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.36:22-10.200.16.10:46158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:52:22.937024 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:52:22.937183 systemd[1]: session-9.scope: Consumed 7.283s CPU time. Jun 25 14:52:22.938494 systemd-logind[1480]: Removed session 9. Jun 25 14:52:23.063310 kubelet[2883]: I0625 14:52:23.063264 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" podStartSLOduration=5.063188373 podStartE2EDuration="5.063188373s" podCreationTimestamp="2024-06-25 14:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:52:19.084057292 +0000 UTC m=+1.266705527" watchObservedRunningTime="2024-06-25 14:52:23.063188373 +0000 UTC m=+5.245836608" Jun 25 14:52:30.550973 kubelet[2883]: I0625 14:52:30.550935 2883 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:52:30.551940 containerd[1520]: time="2024-06-25T14:52:30.551878888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 14:52:30.552581 kubelet[2883]: I0625 14:52:30.552561 2883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:52:31.355074 kubelet[2883]: I0625 14:52:31.355035 2883 topology_manager.go:215] "Topology Admit Handler" podUID="02379e0c-bc5b-452c-b8bc-8334c18a33df" podNamespace="kube-system" podName="kube-proxy-grxsw" Jun 25 14:52:31.360448 systemd[1]: Created slice kubepods-besteffort-pod02379e0c_bc5b_452c_b8bc_8334c18a33df.slice - libcontainer container kubepods-besteffort-pod02379e0c_bc5b_452c_b8bc_8334c18a33df.slice. Jun 25 14:52:31.452493 kubelet[2883]: I0625 14:52:31.452454 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02379e0c-bc5b-452c-b8bc-8334c18a33df-kube-proxy\") pod \"kube-proxy-grxsw\" (UID: \"02379e0c-bc5b-452c-b8bc-8334c18a33df\") " pod="kube-system/kube-proxy-grxsw" Jun 25 14:52:31.452651 kubelet[2883]: I0625 14:52:31.452513 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02379e0c-bc5b-452c-b8bc-8334c18a33df-lib-modules\") pod \"kube-proxy-grxsw\" (UID: \"02379e0c-bc5b-452c-b8bc-8334c18a33df\") " pod="kube-system/kube-proxy-grxsw" Jun 25 14:52:31.452651 kubelet[2883]: I0625 14:52:31.452534 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02379e0c-bc5b-452c-b8bc-8334c18a33df-xtables-lock\") pod \"kube-proxy-grxsw\" (UID: \"02379e0c-bc5b-452c-b8bc-8334c18a33df\") " pod="kube-system/kube-proxy-grxsw" Jun 25 14:52:31.452651 kubelet[2883]: I0625 14:52:31.452585 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzwn\" (UniqueName: \"kubernetes.io/projected/02379e0c-bc5b-452c-b8bc-8334c18a33df-kube-api-access-lqzwn\") pod \"kube-proxy-grxsw\" (UID: \"02379e0c-bc5b-452c-b8bc-8334c18a33df\") " pod="kube-system/kube-proxy-grxsw" Jun 25 14:52:31.608551 kubelet[2883]: I0625 14:52:31.608423 2883 topology_manager.go:215] "Topology Admit Handler" podUID="4afe6c72-cdf5-4281-959e-875606dd6572" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-w9m2r" Jun 25 14:52:31.614770 systemd[1]: Created slice kubepods-besteffort-pod4afe6c72_cdf5_4281_959e_875606dd6572.slice - libcontainer container kubepods-besteffort-pod4afe6c72_cdf5_4281_959e_875606dd6572.slice. Jun 25 14:52:31.669494 containerd[1520]: time="2024-06-25T14:52:31.669087099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grxsw,Uid:02379e0c-bc5b-452c-b8bc-8334c18a33df,Namespace:kube-system,Attempt:0,}" Jun 25 14:52:31.712930 containerd[1520]: time="2024-06-25T14:52:31.712488631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:31.713147 containerd[1520]: time="2024-06-25T14:52:31.712890716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:31.713147 containerd[1520]: time="2024-06-25T14:52:31.712908596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:31.713147 containerd[1520]: time="2024-06-25T14:52:31.712918996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:31.736465 systemd[1]: Started cri-containerd-cd96def41a0c423e241a15b90f0f762a922fe8701c7bd09573558bbb0dc67c54.scope - libcontainer container cd96def41a0c423e241a15b90f0f762a922fe8701c7bd09573558bbb0dc67c54. Jun 25 14:52:31.743000 audit: BPF prog-id=129 op=LOAD Jun 25 14:52:31.748913 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:52:31.749046 kernel: audit: type=1334 audit(1719327151.743:358): prog-id=129 op=LOAD Jun 25 14:52:31.755161 kubelet[2883]: I0625 14:52:31.755066 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4afe6c72-cdf5-4281-959e-875606dd6572-var-lib-calico\") pod \"tigera-operator-76c4974c85-w9m2r\" (UID: \"4afe6c72-cdf5-4281-959e-875606dd6572\") " pod="tigera-operator/tigera-operator-76c4974c85-w9m2r" Jun 25 14:52:31.755161 kubelet[2883]: I0625 14:52:31.755113 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kchkh\" (UniqueName: \"kubernetes.io/projected/4afe6c72-cdf5-4281-959e-875606dd6572-kube-api-access-kchkh\") pod \"tigera-operator-76c4974c85-w9m2r\" (UID: \"4afe6c72-cdf5-4281-959e-875606dd6572\") " pod="tigera-operator/tigera-operator-76c4974c85-w9m2r" Jun 25 14:52:31.743000 audit: BPF prog-id=130 op=LOAD Jun 25 14:52:31.760450 kernel: audit: type=1334 audit(1719327151.743:359): prog-id=130 op=LOAD Jun 25 14:52:31.743000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40002338b0 a2=78 a3=0 items=0 ppid=2970 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.782458 kernel: audit: type=1300 audit(1719327151.743:359): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40002338b0 a2=78 a3=0 items=0 ppid=2970 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393664656634316130633432336532343161313562393066306637 Jun 25 14:52:31.804608 kernel: audit: type=1327 audit(1719327151.743:359): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393664656634316130633432336532343161313562393066306637 Jun 25 14:52:31.748000 audit: BPF prog-id=131 op=LOAD Jun 25 14:52:31.811987 kernel: audit: type=1334 audit(1719327151.748:360): prog-id=131 op=LOAD Jun 25 14:52:31.748000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000233640 a2=78 a3=0 items=0 ppid=2970 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.833879 kernel: audit: type=1300 audit(1719327151.748:360): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000233640 a2=78 a3=0 items=0 ppid=2970 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393664656634316130633432336532343161313562393066306637 Jun 25 14:52:31.862419 kernel: audit: type=1327 audit(1719327151.748:360): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393664656634316130633432336532343161313562393066306637 Jun 25 14:52:31.862572 kernel: audit: type=1334 audit(1719327151.748:361): prog-id=131 op=UNLOAD Jun 25 14:52:31.748000 audit: BPF prog-id=131 op=UNLOAD Jun 25 14:52:31.748000 audit: BPF prog-id=130 op=UNLOAD Jun 25 14:52:31.865284 containerd[1520]: time="2024-06-25T14:52:31.865245564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grxsw,Uid:02379e0c-bc5b-452c-b8bc-8334c18a33df,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd96def41a0c423e241a15b90f0f762a922fe8701c7bd09573558bbb0dc67c54\"" Jun 25 14:52:31.748000 audit: BPF prog-id=132 op=LOAD Jun 25 14:52:31.876198 kernel: audit: type=1334 audit(1719327151.748:362): prog-id=130 op=UNLOAD Jun 25 14:52:31.876380 kernel: audit: type=1334 audit(1719327151.748:363): prog-id=132 op=LOAD Jun 25 14:52:31.748000 audit[2979]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000233b10 a2=78 a3=0 items=0 ppid=2970 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364393664656634316130633432336532343161313562393066306637 Jun 25 14:52:31.877075 containerd[1520]: time="2024-06-25T14:52:31.877038657Z" level=info msg="CreateContainer within sandbox \"cd96def41a0c423e241a15b90f0f762a922fe8701c7bd09573558bbb0dc67c54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:52:31.917196 containerd[1520]: time="2024-06-25T14:52:31.917145472Z" level=info msg="CreateContainer within sandbox \"cd96def41a0c423e241a15b90f0f762a922fe8701c7bd09573558bbb0dc67c54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ed89c0f95066b6bad0bf1fa7f67810a264aa3290fbadf08932cdc9c1ed50f8c\"" Jun 25 14:52:31.919485 containerd[1520]: time="2024-06-25T14:52:31.918605929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-w9m2r,Uid:4afe6c72-cdf5-4281-959e-875606dd6572,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:52:31.919485 containerd[1520]: time="2024-06-25T14:52:31.919258856Z" level=info msg="StartContainer for \"9ed89c0f95066b6bad0bf1fa7f67810a264aa3290fbadf08932cdc9c1ed50f8c\"" Jun 25 14:52:31.946445 systemd[1]: Started cri-containerd-9ed89c0f95066b6bad0bf1fa7f67810a264aa3290fbadf08932cdc9c1ed50f8c.scope - libcontainer container 9ed89c0f95066b6bad0bf1fa7f67810a264aa3290fbadf08932cdc9c1ed50f8c. 
Jun 25 14:52:31.961000 audit: BPF prog-id=133 op=LOAD Jun 25 14:52:31.961000 audit[3012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2970 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965643839633066393530363662366261643062663166613766363738 Jun 25 14:52:31.961000 audit: BPF prog-id=134 op=LOAD Jun 25 14:52:31.961000 audit[3012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2970 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965643839633066393530363662366261643062663166613766363738 Jun 25 14:52:31.961000 audit: BPF prog-id=134 op=UNLOAD Jun 25 14:52:31.961000 audit: BPF prog-id=133 op=UNLOAD Jun 25 14:52:31.961000 audit: BPF prog-id=135 op=LOAD Jun 25 14:52:31.961000 audit[3012]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=2970 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:31.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965643839633066393530363662366261643062663166613766363738 Jun 25 14:52:31.969214 containerd[1520]: time="2024-06-25T14:52:31.969128102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:31.969404 containerd[1520]: time="2024-06-25T14:52:31.969181942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:31.969404 containerd[1520]: time="2024-06-25T14:52:31.969204383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:31.969404 containerd[1520]: time="2024-06-25T14:52:31.969214823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:31.991474 systemd[1]: Started cri-containerd-438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2.scope - libcontainer container 438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2. 
Jun 25 14:52:32.004000 audit: BPF prog-id=136 op=LOAD Jun 25 14:52:32.006043 containerd[1520]: time="2024-06-25T14:52:32.005989239Z" level=info msg="StartContainer for \"9ed89c0f95066b6bad0bf1fa7f67810a264aa3290fbadf08932cdc9c1ed50f8c\" returns successfully" Jun 25 14:52:32.005000 audit: BPF prog-id=137 op=LOAD Jun 25 14:52:32.005000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3037 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386639623561653865373833343731616634393132666564393064 Jun 25 14:52:32.005000 audit: BPF prog-id=138 op=LOAD Jun 25 14:52:32.005000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3037 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386639623561653865373833343731616634393132666564393064 Jun 25 14:52:32.005000 audit: BPF prog-id=138 op=UNLOAD Jun 25 14:52:32.005000 audit: BPF prog-id=137 op=UNLOAD Jun 25 14:52:32.005000 audit: BPF prog-id=139 op=LOAD Jun 25 14:52:32.005000 audit[3052]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3037 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433386639623561653865373833343731616634393132666564393064 Jun 25 14:52:32.022757 kubelet[2883]: I0625 14:52:32.022713 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-grxsw" podStartSLOduration=1.022669424 podStartE2EDuration="1.022669424s" podCreationTimestamp="2024-06-25 14:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:52:32.022495503 +0000 UTC m=+14.205143738" watchObservedRunningTime="2024-06-25 14:52:32.022669424 +0000 UTC m=+14.205317659" Jun 25 14:52:32.039470 containerd[1520]: time="2024-06-25T14:52:32.039410611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-w9m2r,Uid:4afe6c72-cdf5-4281-959e-875606dd6572,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2\"" Jun 25 14:52:32.043851 containerd[1520]: time="2024-06-25T14:52:32.043798980Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:52:32.084000 audit[3103]: NETFILTER_CFG table=mangle:41 family=2 entries=1 
op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.084000 audit[3103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe9d6590 a2=0 a3=1 items=0 ppid=3023 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:52:32.086000 audit[3104]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.086000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdbafd3f0 a2=0 a3=1 items=0 ppid=3023 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:52:32.090000 audit[3107]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.090000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff9aba30 a2=0 a3=1 items=0 ppid=3023 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:52:32.091000 audit[3108]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.091000 audit[3108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd410a280 a2=0 a3=1 items=0 ppid=3023 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:52:32.092000 audit[3106]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.092000 audit[3106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc5405e0 a2=0 a3=1 items=0 ppid=3023 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:52:32.093000 audit[3109]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.093000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeaf3d160 a2=0 a3=1 items=0 
ppid=3023 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:52:32.185000 audit[3110]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.185000 audit[3110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd47263b0 a2=0 a3=1 items=0 ppid=3023 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:52:32.189000 audit[3112]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.189000 audit[3112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff5e28f00 a2=0 a3=1 items=0 ppid=3023 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:52:32.194000 audit[3115]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.194000 audit[3115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcbae70f0 a2=0 a3=1 items=0 ppid=3023 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:52:32.195000 audit[3116]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.195000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed7a3f40 a2=0 a3=1 items=0 ppid=3023 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:52:32.199000 audit[3118]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.199000 audit[3118]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdfd3d880 a2=0 a3=1 items=0 ppid=3023 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:52:32.200000 audit[3119]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.200000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebbe3a00 a2=0 a3=1 items=0 ppid=3023 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:52:32.203000 audit[3121]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.203000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd4121310 a2=0 a3=1 items=0 ppid=3023 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:52:32.207000 audit[3124]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.207000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffebab1d90 a2=0 a3=1 items=0 ppid=3023 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:52:32.208000 audit[3125]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.208000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6f4cca0 a2=0 a3=1 items=0 ppid=3023 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 
14:52:32.211000 audit[3127]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.211000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcac12520 a2=0 a3=1 items=0 ppid=3023 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:52:32.213000 audit[3128]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.213000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc03ad210 a2=0 a3=1 items=0 ppid=3023 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:52:32.217000 audit[3130]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.217000 audit[3130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffff1fb2b0 a2=0 a3=1 items=0 ppid=3023 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:52:32.225000 audit[3133]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.225000 audit[3133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff6614ca0 a2=0 a3=1 items=0 ppid=3023 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:52:32.229000 audit[3136]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.229000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc587d190 a2=0 a3=1 items=0 ppid=3023 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:52:32.230000 audit[3137]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.230000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffca66930 a2=0 a3=1 items=0 ppid=3023 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:52:32.234000 audit[3139]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.234000 audit[3139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdc3421c0 a2=0 a3=1 items=0 ppid=3023 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:52:32.237000 audit[3142]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.237000 audit[3142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd7d3f240 a2=0 a3=1 items=0 ppid=3023 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:52:32.239000 audit[3143]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.239000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc306abf0 a2=0 a3=1 items=0 ppid=3023 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:52:32.242000 audit[3145]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:52:32.242000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffcba5330 a2=0 a3=1 items=0 ppid=3023 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:52:32.276000 audit[3151]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:32.276000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff188bd00 a2=0 a3=1 items=0 ppid=3023 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:32.292000 audit[3151]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:32.292000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff188bd00 a2=0 a3=1 items=0 ppid=3023 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:32.294000 audit[3156]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.294000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe351de30 a2=0 a3=1 items=0 ppid=3023 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:52:32.297000 audit[3158]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.297000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff330c880 a2=0 a3=1 items=0 ppid=3023 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.297000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:52:32.302000 audit[3161]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.302000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=836 a0=3 a1=ffffc66f1300 a2=0 a3=1 items=0 ppid=3023 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:52:32.303000 audit[3162]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.303000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc626d890 a2=0 a3=1 items=0 ppid=3023 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:52:32.306000 audit[3164]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.306000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe4e643d0 a2=0 a3=1 items=0 ppid=3023 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.306000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:52:32.307000 audit[3165]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.307000 audit[3165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe97ade50 a2=0 a3=1 items=0 ppid=3023 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:52:32.310000 audit[3167]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.310000 audit[3167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff777f800 a2=0 a3=1 items=0 ppid=3023 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:52:32.314000 
audit[3170]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.314000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe97bf050 a2=0 a3=1 items=0 ppid=3023 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:52:32.315000 audit[3171]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.315000 audit[3171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9c202f0 a2=0 a3=1 items=0 ppid=3023 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:52:32.318000 audit[3173]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.318000 audit[3173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe49611f0 a2=0 a3=1 items=0 ppid=3023 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:52:32.319000 audit[3174]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.319000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0777650 a2=0 a3=1 items=0 ppid=3023 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:52:32.324000 audit[3176]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.324000 audit[3176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcaf2f270 a2=0 a3=1 items=0 ppid=3023 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.324000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:52:32.329000 audit[3179]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.329000 audit[3179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffa9f8190 a2=0 a3=1 items=0 ppid=3023 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:52:32.333000 audit[3182]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.333000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7d757c0 a2=0 a3=1 items=0 ppid=3023 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:52:32.335000 audit[3183]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.335000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffd5cbc10 a2=0 a3=1 items=0 ppid=3023 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:52:32.337000 audit[3185]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.337000 audit[3185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe646e340 a2=0 a3=1 items=0 ppid=3023 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:52:32.341000 audit[3188]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.341000 audit[3188]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd4fd4cf0 a2=0 a3=1 items=0 ppid=3023 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:52:32.342000 audit[3189]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.342000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb180e20 a2=0 a3=1 items=0 ppid=3023 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:52:32.345000 audit[3191]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.345000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe1d7ac80 a2=0 a3=1 items=0 ppid=3023 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:52:32.346000 audit[3192]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.346000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9987a70 a2=0 a3=1 items=0 ppid=3023 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:52:32.349000 audit[3194]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:52:32.349000 audit[3194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc5801ee0 a2=0 a3=1 items=0 ppid=3023 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:52:32.353000 audit[3197]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 
14:52:32.353000 audit[3197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd8e0d010 a2=0 a3=1 items=0 ppid=3023 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:52:32.356000 audit[3199]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:52:32.356000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffd4746850 a2=0 a3=1 items=0 ppid=3023 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.356000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:32.357000 audit[3199]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:52:32.357000 audit[3199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd4746850 a2=0 a3=1 items=0 ppid=3023 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:32.357000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:32.586493 systemd[1]: run-containerd-runc-k8s.io-cd96def41a0c423e241a15b90f0f762a922fe8701c7bd09573558bbb0dc67c54-runc.R4AvgS.mount: Deactivated successfully. Jun 25 14:52:33.491207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831253407.mount: Deactivated successfully. 
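The PROCTITLE field in the audit records above is the recorded command line of the process (/proc/<pid>/cmdline), hex-encoded because its arguments are separated by NUL bytes. A minimal decoding sketch follows (Python is an assumption here; nothing in the log implies it is installed on the host):

# Decode an audit PROCTITLE value: hex -> bytes, then NUL separators -> spaces.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode()

# One of the values recorded above (the ip6tables-restore invocation):
print(decode_proctitle(
    "6970367461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ip6tables-restore -w 5 -W 100000 --noflush --counters

The same decoding applied to the longer PROCTITLE values shows the individual ip6tables rules being installed for the KUBE-* chains; the longest values appear to be truncated (they stop mid-argument), so they decode to commands that simply cut off.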
Jun 25 14:52:33.933399 containerd[1520]: time="2024-06-25T14:52:33.933345010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:33.936602 containerd[1520]: time="2024-06-25T14:52:33.936551485Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473650" Jun 25 14:52:33.944260 containerd[1520]: time="2024-06-25T14:52:33.944201889Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:33.948343 containerd[1520]: time="2024-06-25T14:52:33.948292734Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:33.952110 containerd[1520]: time="2024-06-25T14:52:33.952053135Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:33.953260 containerd[1520]: time="2024-06-25T14:52:33.953186827Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.909334287s" Jun 25 14:52:33.953260 containerd[1520]: time="2024-06-25T14:52:33.953261908Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:52:33.956842 containerd[1520]: time="2024-06-25T14:52:33.956319301Z" level=info msg="CreateContainer within sandbox \"438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:52:33.979194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310504962.mount: Deactivated successfully. Jun 25 14:52:33.983934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571482462.mount: Deactivated successfully. Jun 25 14:52:33.997015 containerd[1520]: time="2024-06-25T14:52:33.996956745Z" level=info msg="CreateContainer within sandbox \"438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24\"" Jun 25 14:52:33.999109 containerd[1520]: time="2024-06-25T14:52:33.997666273Z" level=info msg="StartContainer for \"4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24\"" Jun 25 14:52:34.025474 systemd[1]: Started cri-containerd-4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24.scope - libcontainer container 4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24. 
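The containerd messages above report 19473650 bytes read for quay.io/tigera/operator:v1.34.0 and a pull that completed in 1.909334287s. A rough throughput figure can be derived from those two numbers (a back-of-envelope illustration only; containerd does not log this value):

# Effective pull rate for the tigera/operator image, from the figures quoted above.
bytes_read = 19_473_650        # "bytes read" in the "stop pulling image" message
elapsed_s = 1.909334287        # duration in the "Pulled image ... in 1.909334287s" message

print(f"{bytes_read / elapsed_s / 2**20:.2f} MiB/s")   # roughly 9.7 MiB/s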
Jun 25 14:52:34.033000 audit: BPF prog-id=140 op=LOAD Jun 25 14:52:34.034000 audit: BPF prog-id=141 op=LOAD Jun 25 14:52:34.034000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=3037 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:34.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462636435653430643639333031363764623435663239663336343466 Jun 25 14:52:34.034000 audit: BPF prog-id=142 op=LOAD Jun 25 14:52:34.034000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=3037 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:34.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462636435653430643639333031363764623435663239663336343466 Jun 25 14:52:34.034000 audit: BPF prog-id=142 op=UNLOAD Jun 25 14:52:34.034000 audit: BPF prog-id=141 op=UNLOAD Jun 25 14:52:34.034000 audit: BPF prog-id=143 op=LOAD Jun 25 14:52:34.034000 audit[3216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=3037 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:34.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462636435653430643639333031363764623435663239663336343466 Jun 25 14:52:34.054537 containerd[1520]: time="2024-06-25T14:52:34.054484402Z" level=info msg="StartContainer for \"4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24\" returns successfully" Jun 25 14:52:37.597000 audit[3249]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.603293 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 14:52:37.603437 kernel: audit: type=1325 audit(1719327157.597:432): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.597000 audit[3249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff9956bc0 a2=0 a3=1 items=0 ppid=3023 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.641141 kernel: audit: type=1300 audit(1719327157.597:432): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff9956bc0 a2=0 a3=1 items=0 ppid=3023 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.654418 kernel: audit: type=1327 audit(1719327157.597:432): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.616000 audit[3249]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.616000 audit[3249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9956bc0 a2=0 a3=1 items=0 ppid=3023 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.695630 kernel: audit: type=1325 audit(1719327157.616:433): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.695753 kernel: audit: type=1300 audit(1719327157.616:433): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9956bc0 a2=0 a3=1 items=0 ppid=3023 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.708924 kernel: audit: type=1327 audit(1719327157.616:433): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.665000 audit[3251]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.727025 kernel: audit: type=1325 audit(1719327157.665:434): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.665000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe7356f30 a2=0 a3=1 items=0 ppid=3023 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.753107 kernel: audit: type=1300 audit(1719327157.665:434): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe7356f30 a2=0 a3=1 items=0 ppid=3023 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.766255 kernel: audit: type=1327 audit(1719327157.665:434): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.696000 audit[3251]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.779496 kernel: audit: type=1325 
audit(1719327157.696:435): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:37.696000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7356f30 a2=0 a3=1 items=0 ppid=3023 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:37.696000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:37.795963 kubelet[2883]: I0625 14:52:37.795904 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-w9m2r" podStartSLOduration=4.882883944 podStartE2EDuration="6.795859631s" podCreationTimestamp="2024-06-25 14:52:31 +0000 UTC" firstStartedPulling="2024-06-25 14:52:32.040607824 +0000 UTC m=+14.223256019" lastFinishedPulling="2024-06-25 14:52:33.953583471 +0000 UTC m=+16.136231706" observedRunningTime="2024-06-25 14:52:35.027853869 +0000 UTC m=+17.210502104" watchObservedRunningTime="2024-06-25 14:52:37.795859631 +0000 UTC m=+19.978507866" Jun 25 14:52:37.796318 kubelet[2883]: I0625 14:52:37.796053 2883 topology_manager.go:215] "Topology Admit Handler" podUID="76834308-3c3b-4337-ba43-680d02a490f4" podNamespace="calico-system" podName="calico-typha-5cfd97c569-f8b6v" Jun 25 14:52:37.801338 systemd[1]: Created slice kubepods-besteffort-pod76834308_3c3b_4337_ba43_680d02a490f4.slice - libcontainer container kubepods-besteffort-pod76834308_3c3b_4337_ba43_680d02a490f4.slice. Jun 25 14:52:37.895182 kubelet[2883]: I0625 14:52:37.895067 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ggmb\" (UniqueName: \"kubernetes.io/projected/76834308-3c3b-4337-ba43-680d02a490f4-kube-api-access-6ggmb\") pod \"calico-typha-5cfd97c569-f8b6v\" (UID: \"76834308-3c3b-4337-ba43-680d02a490f4\") " pod="calico-system/calico-typha-5cfd97c569-f8b6v" Jun 25 14:52:37.895182 kubelet[2883]: I0625 14:52:37.895117 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/76834308-3c3b-4337-ba43-680d02a490f4-typha-certs\") pod \"calico-typha-5cfd97c569-f8b6v\" (UID: \"76834308-3c3b-4337-ba43-680d02a490f4\") " pod="calico-system/calico-typha-5cfd97c569-f8b6v" Jun 25 14:52:37.895182 kubelet[2883]: I0625 14:52:37.895142 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76834308-3c3b-4337-ba43-680d02a490f4-tigera-ca-bundle\") pod \"calico-typha-5cfd97c569-f8b6v\" (UID: \"76834308-3c3b-4337-ba43-680d02a490f4\") " pod="calico-system/calico-typha-5cfd97c569-f8b6v" Jun 25 14:52:37.910751 kubelet[2883]: I0625 14:52:37.910714 2883 topology_manager.go:215] "Topology Admit Handler" podUID="dde68ec8-4291-486e-a2b4-4cf7ce3816e5" podNamespace="calico-system" podName="calico-node-nfv4j" Jun 25 14:52:37.915866 systemd[1]: Created slice kubepods-besteffort-poddde68ec8_4291_486e_a2b4_4cf7ce3816e5.slice - libcontainer container kubepods-besteffort-poddde68ec8_4291_486e_a2b4_4cf7ce3816e5.slice. 
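The pod_startup_latency_tracker entry above reports podStartSLOduration=4.882883944 alongside podStartE2EDuration=6.795859631s. Those figures are consistent with the SLO duration being the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ values). A quick arithmetic check, offered as an illustration rather than kubelet code:

# Monotonic timestamps (m=+...) quoted in the pod_startup_latency_tracker entry above.
first_started_pulling = 14.223256019
last_finished_pulling = 16.136231706
pod_start_e2e = 6.795859631    # watchObservedRunningTime - podCreationTimestamp

pull_window = last_finished_pulling - first_started_pulling   # ~1.912975687 s
print(round(pod_start_e2e - pull_window, 9))                  # 4.882883944, matching podStartSLOduration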
Jun 25 14:52:37.996040 kubelet[2883]: I0625 14:52:37.996003 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm65g\" (UniqueName: \"kubernetes.io/projected/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-kube-api-access-hm65g\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.996314 kubelet[2883]: I0625 14:52:37.996298 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-lib-modules\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.996435 kubelet[2883]: I0625 14:52:37.996425 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-node-certs\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.996610 kubelet[2883]: I0625 14:52:37.996565 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-policysync\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.996720 kubelet[2883]: I0625 14:52:37.996709 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-cni-log-dir\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.997297 kubelet[2883]: I0625 14:52:37.996848 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-cni-net-dir\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.997555 kubelet[2883]: I0625 14:52:37.997542 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-xtables-lock\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.997664 kubelet[2883]: I0625 14:52:37.997653 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-tigera-ca-bundle\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.997770 kubelet[2883]: I0625 14:52:37.997759 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-var-run-calico\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.997864 kubelet[2883]: I0625 14:52:37.997854 2883 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-cni-bin-dir\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.997949 kubelet[2883]: I0625 14:52:37.997940 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-flexvol-driver-host\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:37.998052 kubelet[2883]: I0625 14:52:37.998042 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dde68ec8-4291-486e-a2b4-4cf7ce3816e5-var-lib-calico\") pod \"calico-node-nfv4j\" (UID: \"dde68ec8-4291-486e-a2b4-4cf7ce3816e5\") " pod="calico-system/calico-node-nfv4j" Jun 25 14:52:38.037677 kubelet[2883]: I0625 14:52:38.037620 2883 topology_manager.go:215] "Topology Admit Handler" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" podNamespace="calico-system" podName="csi-node-driver-rstqt" Jun 25 14:52:38.037913 kubelet[2883]: E0625 14:52:38.037882 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:38.100247 kubelet[2883]: E0625 14:52:38.100194 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.100247 kubelet[2883]: W0625 14:52:38.100219 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.100402 kubelet[2883]: E0625 14:52:38.100327 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.100527 kubelet[2883]: E0625 14:52:38.100509 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.100527 kubelet[2883]: W0625 14:52:38.100522 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.100608 kubelet[2883]: E0625 14:52:38.100539 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.100713 kubelet[2883]: E0625 14:52:38.100693 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.100713 kubelet[2883]: W0625 14:52:38.100705 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.100713 kubelet[2883]: E0625 14:52:38.100720 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.100932 kubelet[2883]: E0625 14:52:38.100916 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.100932 kubelet[2883]: W0625 14:52:38.100930 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.101021 kubelet[2883]: E0625 14:52:38.100945 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.101127 kubelet[2883]: E0625 14:52:38.101111 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.101127 kubelet[2883]: W0625 14:52:38.101123 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.101207 kubelet[2883]: E0625 14:52:38.101136 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.101310 kubelet[2883]: E0625 14:52:38.101295 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.101310 kubelet[2883]: W0625 14:52:38.101307 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.101389 kubelet[2883]: E0625 14:52:38.101331 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.101501 kubelet[2883]: E0625 14:52:38.101479 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.101501 kubelet[2883]: W0625 14:52:38.101493 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.101618 kubelet[2883]: E0625 14:52:38.101598 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.101710 kubelet[2883]: E0625 14:52:38.101638 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.101771 kubelet[2883]: W0625 14:52:38.101757 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.101947 kubelet[2883]: E0625 14:52:38.101922 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.102094 kubelet[2883]: E0625 14:52:38.102080 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.102162 kubelet[2883]: W0625 14:52:38.102150 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.102276 kubelet[2883]: E0625 14:52:38.102255 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.102539 kubelet[2883]: E0625 14:52:38.102525 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.102615 kubelet[2883]: W0625 14:52:38.102603 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.102716 kubelet[2883]: E0625 14:52:38.102689 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.102933 kubelet[2883]: E0625 14:52:38.102918 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.103006 kubelet[2883]: W0625 14:52:38.102994 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.103109 kubelet[2883]: E0625 14:52:38.103083 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.104463 kubelet[2883]: E0625 14:52:38.104436 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.104584 kubelet[2883]: W0625 14:52:38.104568 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.104699 kubelet[2883]: E0625 14:52:38.104668 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.104966 kubelet[2883]: E0625 14:52:38.104951 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.105050 kubelet[2883]: W0625 14:52:38.105037 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.105833 containerd[1520]: time="2024-06-25T14:52:38.105784918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cfd97c569-f8b6v,Uid:76834308-3c3b-4337-ba43-680d02a490f4,Namespace:calico-system,Attempt:0,}" Jun 25 14:52:38.109319 kubelet[2883]: E0625 14:52:38.106355 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.109581 kubelet[2883]: E0625 14:52:38.109561 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.109657 kubelet[2883]: W0625 14:52:38.109640 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.109783 kubelet[2883]: E0625 14:52:38.109746 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.110053 kubelet[2883]: E0625 14:52:38.110039 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.110139 kubelet[2883]: W0625 14:52:38.110127 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.110260 kubelet[2883]: E0625 14:52:38.110212 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.110497 kubelet[2883]: E0625 14:52:38.110483 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.110568 kubelet[2883]: W0625 14:52:38.110557 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.110663 kubelet[2883]: E0625 14:52:38.110636 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.110887 kubelet[2883]: E0625 14:52:38.110874 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.110973 kubelet[2883]: W0625 14:52:38.110960 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.111084 kubelet[2883]: E0625 14:52:38.111055 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.113418 kubelet[2883]: E0625 14:52:38.113393 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.113546 kubelet[2883]: W0625 14:52:38.113530 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.113686 kubelet[2883]: E0625 14:52:38.113645 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.113970 kubelet[2883]: E0625 14:52:38.113956 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.114074 kubelet[2883]: W0625 14:52:38.114060 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.114181 kubelet[2883]: E0625 14:52:38.114163 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.114491 kubelet[2883]: E0625 14:52:38.114477 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.114594 kubelet[2883]: W0625 14:52:38.114581 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.114785 kubelet[2883]: E0625 14:52:38.114765 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.115111 kubelet[2883]: E0625 14:52:38.115089 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.115206 kubelet[2883]: W0625 14:52:38.115192 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.117316 kubelet[2883]: E0625 14:52:38.116999 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.117316 kubelet[2883]: E0625 14:52:38.117208 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.117316 kubelet[2883]: W0625 14:52:38.117217 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.117316 kubelet[2883]: E0625 14:52:38.117323 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.117502 kubelet[2883]: E0625 14:52:38.117433 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.117502 kubelet[2883]: W0625 14:52:38.117440 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.117502 kubelet[2883]: E0625 14:52:38.117500 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.117653 kubelet[2883]: E0625 14:52:38.117628 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.117653 kubelet[2883]: W0625 14:52:38.117643 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.117753 kubelet[2883]: E0625 14:52:38.117735 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.117894 kubelet[2883]: E0625 14:52:38.117843 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.117894 kubelet[2883]: W0625 14:52:38.117854 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122035 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122510 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.123278 kubelet[2883]: W0625 14:52:38.122522 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122544 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122747 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.123278 kubelet[2883]: W0625 14:52:38.122756 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122771 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122914 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.123278 kubelet[2883]: W0625 14:52:38.122921 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.123278 kubelet[2883]: E0625 14:52:38.122934 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.123590 kubelet[2883]: E0625 14:52:38.123073 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.123590 kubelet[2883]: W0625 14:52:38.123080 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.123590 kubelet[2883]: E0625 14:52:38.123090 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.124314 kubelet[2883]: E0625 14:52:38.124211 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.124314 kubelet[2883]: W0625 14:52:38.124310 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.124442 kubelet[2883]: E0625 14:52:38.124329 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.131684 kubelet[2883]: E0625 14:52:38.131649 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.131684 kubelet[2883]: W0625 14:52:38.131674 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.131851 kubelet[2883]: E0625 14:52:38.131696 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.131851 kubelet[2883]: E0625 14:52:38.131830 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.131851 kubelet[2883]: W0625 14:52:38.131836 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.131851 kubelet[2883]: E0625 14:52:38.131846 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.131981 kubelet[2883]: E0625 14:52:38.131961 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.131981 kubelet[2883]: W0625 14:52:38.131973 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132065 kubelet[2883]: E0625 14:52:38.131984 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.132119 kubelet[2883]: E0625 14:52:38.132100 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.132119 kubelet[2883]: W0625 14:52:38.132111 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132178 kubelet[2883]: E0625 14:52:38.132122 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.132309 kubelet[2883]: E0625 14:52:38.132280 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.132309 kubelet[2883]: W0625 14:52:38.132293 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132309 kubelet[2883]: E0625 14:52:38.132304 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.132447 kubelet[2883]: E0625 14:52:38.132428 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.132447 kubelet[2883]: W0625 14:52:38.132441 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132507 kubelet[2883]: E0625 14:52:38.132454 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.132582 kubelet[2883]: E0625 14:52:38.132566 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.132582 kubelet[2883]: W0625 14:52:38.132577 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132647 kubelet[2883]: E0625 14:52:38.132586 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.132718 kubelet[2883]: E0625 14:52:38.132700 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.132718 kubelet[2883]: W0625 14:52:38.132712 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132783 kubelet[2883]: E0625 14:52:38.132723 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.132916 kubelet[2883]: E0625 14:52:38.132871 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.132916 kubelet[2883]: W0625 14:52:38.132883 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.132916 kubelet[2883]: E0625 14:52:38.132894 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.133138 kubelet[2883]: E0625 14:52:38.133119 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.133138 kubelet[2883]: W0625 14:52:38.133136 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.133206 kubelet[2883]: E0625 14:52:38.133150 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.133344 kubelet[2883]: E0625 14:52:38.133314 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.133344 kubelet[2883]: W0625 14:52:38.133328 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.133415 kubelet[2883]: E0625 14:52:38.133347 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.133491 kubelet[2883]: E0625 14:52:38.133474 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.133491 kubelet[2883]: W0625 14:52:38.133486 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.133543 kubelet[2883]: E0625 14:52:38.133497 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.133652 kubelet[2883]: E0625 14:52:38.133633 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.133652 kubelet[2883]: W0625 14:52:38.133645 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.133713 kubelet[2883]: E0625 14:52:38.133655 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.133788 kubelet[2883]: E0625 14:52:38.133769 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.133788 kubelet[2883]: W0625 14:52:38.133780 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.133848 kubelet[2883]: E0625 14:52:38.133793 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.133925 kubelet[2883]: E0625 14:52:38.133904 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.133925 kubelet[2883]: W0625 14:52:38.133920 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.134021 kubelet[2883]: E0625 14:52:38.133930 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.134329 kubelet[2883]: E0625 14:52:38.134307 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.134329 kubelet[2883]: W0625 14:52:38.134324 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.134421 kubelet[2883]: E0625 14:52:38.134338 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.134640 kubelet[2883]: E0625 14:52:38.134617 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.134640 kubelet[2883]: W0625 14:52:38.134631 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.134640 kubelet[2883]: E0625 14:52:38.134644 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.134817 kubelet[2883]: E0625 14:52:38.134792 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.134817 kubelet[2883]: W0625 14:52:38.134807 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.134817 kubelet[2883]: E0625 14:52:38.134819 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.134959 kubelet[2883]: E0625 14:52:38.134940 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.134959 kubelet[2883]: W0625 14:52:38.134952 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.135054 kubelet[2883]: E0625 14:52:38.134963 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.135101 kubelet[2883]: E0625 14:52:38.135080 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.135101 kubelet[2883]: W0625 14:52:38.135092 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.135101 kubelet[2883]: E0625 14:52:38.135103 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.164756 kubelet[2883]: E0625 14:52:38.158413 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.164756 kubelet[2883]: W0625 14:52:38.158440 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.164756 kubelet[2883]: E0625 14:52:38.158467 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.171802 containerd[1520]: time="2024-06-25T14:52:38.171699576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:38.171982 containerd[1520]: time="2024-06-25T14:52:38.171759776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:38.172091 containerd[1520]: time="2024-06-25T14:52:38.172029699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:38.172197 containerd[1520]: time="2024-06-25T14:52:38.172069579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:38.200932 systemd[1]: Started cri-containerd-124841eaff29e1b73b0b6b036b20ddf85f2798c961520e810d6600935078cab5.scope - libcontainer container 124841eaff29e1b73b0b6b036b20ddf85f2798c961520e810d6600935078cab5. Jun 25 14:52:38.202259 kubelet[2883]: E0625 14:52:38.202217 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.202259 kubelet[2883]: W0625 14:52:38.202252 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.202399 kubelet[2883]: E0625 14:52:38.202278 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.202399 kubelet[2883]: I0625 14:52:38.202317 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3-kubelet-dir\") pod \"csi-node-driver-rstqt\" (UID: \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\") " pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:38.202598 kubelet[2883]: E0625 14:52:38.202575 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.202598 kubelet[2883]: W0625 14:52:38.202591 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.202664 kubelet[2883]: E0625 14:52:38.202617 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.202664 kubelet[2883]: I0625 14:52:38.202638 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3-varrun\") pod \"csi-node-driver-rstqt\" (UID: \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\") " pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:38.204608 kubelet[2883]: E0625 14:52:38.204565 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.204608 kubelet[2883]: W0625 14:52:38.204590 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.204608 kubelet[2883]: E0625 14:52:38.204613 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.204771 kubelet[2883]: I0625 14:52:38.204643 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3-socket-dir\") pod \"csi-node-driver-rstqt\" (UID: \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\") " pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:38.207398 kubelet[2883]: E0625 14:52:38.207355 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.207398 kubelet[2883]: W0625 14:52:38.207384 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.207674 kubelet[2883]: E0625 14:52:38.207571 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.207674 kubelet[2883]: I0625 14:52:38.207610 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvjc\" (UniqueName: \"kubernetes.io/projected/e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3-kube-api-access-6cvjc\") pod \"csi-node-driver-rstqt\" (UID: \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\") " pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:38.209110 kubelet[2883]: E0625 14:52:38.209062 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.209110 kubelet[2883]: W0625 14:52:38.209090 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.209421 kubelet[2883]: E0625 14:52:38.209275 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.209466 kubelet[2883]: E0625 14:52:38.209453 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.209501 kubelet[2883]: W0625 14:52:38.209469 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.209635 kubelet[2883]: E0625 14:52:38.209568 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.209774 kubelet[2883]: E0625 14:52:38.209750 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.209774 kubelet[2883]: W0625 14:52:38.209769 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.209937 kubelet[2883]: E0625 14:52:38.209868 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.210092 kubelet[2883]: E0625 14:52:38.210069 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.210092 kubelet[2883]: W0625 14:52:38.210084 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.210279 kubelet[2883]: E0625 14:52:38.210193 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.210279 kubelet[2883]: I0625 14:52:38.210252 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3-registration-dir\") pod \"csi-node-driver-rstqt\" (UID: \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\") " pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:38.210677 kubelet[2883]: E0625 14:52:38.210645 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.210677 kubelet[2883]: W0625 14:52:38.210671 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.210845 kubelet[2883]: E0625 14:52:38.210770 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.213403 kubelet[2883]: E0625 14:52:38.213365 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.213403 kubelet[2883]: W0625 14:52:38.213389 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.213537 kubelet[2883]: E0625 14:52:38.213414 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.213751 kubelet[2883]: E0625 14:52:38.213720 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.213751 kubelet[2883]: W0625 14:52:38.213741 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.213819 kubelet[2883]: E0625 14:52:38.213762 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.213993 kubelet[2883]: E0625 14:52:38.213969 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.213993 kubelet[2883]: W0625 14:52:38.213984 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.214063 kubelet[2883]: E0625 14:52:38.214008 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.214306 kubelet[2883]: E0625 14:52:38.214223 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.214306 kubelet[2883]: W0625 14:52:38.214261 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.214306 kubelet[2883]: E0625 14:52:38.214275 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.214532 kubelet[2883]: E0625 14:52:38.214512 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.214532 kubelet[2883]: W0625 14:52:38.214524 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.214532 kubelet[2883]: E0625 14:52:38.214536 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.214771 kubelet[2883]: E0625 14:52:38.214747 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.214771 kubelet[2883]: W0625 14:52:38.214762 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.214771 kubelet[2883]: E0625 14:52:38.214775 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.219425 containerd[1520]: time="2024-06-25T14:52:38.219366331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nfv4j,Uid:dde68ec8-4291-486e-a2b4-4cf7ce3816e5,Namespace:calico-system,Attempt:0,}" Jun 25 14:52:38.227000 audit: BPF prog-id=144 op=LOAD Jun 25 14:52:38.228000 audit: BPF prog-id=145 op=LOAD Jun 25 14:52:38.228000 audit[3328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3317 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343834316561666632396531623733623062366230333662323064 Jun 25 14:52:38.228000 audit: BPF prog-id=146 op=LOAD Jun 25 14:52:38.228000 audit[3328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3317 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343834316561666632396531623733623062366230333662323064 Jun 25 14:52:38.228000 audit: BPF prog-id=146 op=UNLOAD Jun 25 14:52:38.228000 audit: BPF prog-id=145 op=UNLOAD Jun 25 14:52:38.229000 audit: BPF prog-id=147 op=LOAD Jun 25 14:52:38.229000 audit[3328]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3317 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343834316561666632396531623733623062366230333662323064 Jun 25 14:52:38.266440 containerd[1520]: time="2024-06-25T14:52:38.266329879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:52:38.266440 containerd[1520]: time="2024-06-25T14:52:38.266395400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:38.266440 containerd[1520]: time="2024-06-25T14:52:38.266410520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:52:38.266660 containerd[1520]: time="2024-06-25T14:52:38.266420640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:52:38.303474 systemd[1]: Started cri-containerd-865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17.scope - libcontainer container 865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17. Jun 25 14:52:38.308997 containerd[1520]: time="2024-06-25T14:52:38.308937304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cfd97c569-f8b6v,Uid:76834308-3c3b-4337-ba43-680d02a490f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"124841eaff29e1b73b0b6b036b20ddf85f2798c961520e810d6600935078cab5\"" Jun 25 14:52:38.313044 containerd[1520]: time="2024-06-25T14:52:38.312995745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:52:38.314693 kubelet[2883]: E0625 14:52:38.314426 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.314693 kubelet[2883]: W0625 14:52:38.314448 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.314693 kubelet[2883]: E0625 14:52:38.314470 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.315478 kubelet[2883]: E0625 14:52:38.315155 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.315478 kubelet[2883]: W0625 14:52:38.315185 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.315478 kubelet[2883]: E0625 14:52:38.315205 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.315973 kubelet[2883]: E0625 14:52:38.315737 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.315973 kubelet[2883]: W0625 14:52:38.315752 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.315973 kubelet[2883]: E0625 14:52:38.315775 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.316523 kubelet[2883]: E0625 14:52:38.316257 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.316523 kubelet[2883]: W0625 14:52:38.316285 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.316523 kubelet[2883]: E0625 14:52:38.316300 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.316947 kubelet[2883]: E0625 14:52:38.316818 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.316947 kubelet[2883]: W0625 14:52:38.316832 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.316947 kubelet[2883]: E0625 14:52:38.316857 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.317770 kubelet[2883]: E0625 14:52:38.317737 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.317770 kubelet[2883]: W0625 14:52:38.317762 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.317770 kubelet[2883]: E0625 14:52:38.317784 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.318319 kubelet[2883]: E0625 14:52:38.318283 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.318319 kubelet[2883]: W0625 14:52:38.318300 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.318319 kubelet[2883]: E0625 14:52:38.318318 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.319144 kubelet[2883]: E0625 14:52:38.319095 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.319144 kubelet[2883]: W0625 14:52:38.319117 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.319777 kubelet[2883]: E0625 14:52:38.319492 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.320408 kubelet[2883]: E0625 14:52:38.320379 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.320408 kubelet[2883]: W0625 14:52:38.320401 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.320657 kubelet[2883]: E0625 14:52:38.320538 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.321377 kubelet[2883]: E0625 14:52:38.321347 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.321377 kubelet[2883]: W0625 14:52:38.321370 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.321659 kubelet[2883]: E0625 14:52:38.321528 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.322383 kubelet[2883]: E0625 14:52:38.322345 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.322383 kubelet[2883]: W0625 14:52:38.322367 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.322596 kubelet[2883]: E0625 14:52:38.322490 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.323373 kubelet[2883]: E0625 14:52:38.323340 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.323373 kubelet[2883]: W0625 14:52:38.323366 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.323653 kubelet[2883]: E0625 14:52:38.323523 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.324404 kubelet[2883]: E0625 14:52:38.324361 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.324404 kubelet[2883]: W0625 14:52:38.324382 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.324519 kubelet[2883]: E0625 14:52:38.324507 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.325608 kubelet[2883]: E0625 14:52:38.325509 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.325762 kubelet[2883]: W0625 14:52:38.325734 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.325955 kubelet[2883]: E0625 14:52:38.325874 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.326358 kubelet[2883]: E0625 14:52:38.326327 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.326358 kubelet[2883]: W0625 14:52:38.326349 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.326617 kubelet[2883]: E0625 14:52:38.326505 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.327391 kubelet[2883]: E0625 14:52:38.327367 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.327391 kubelet[2883]: W0625 14:52:38.327385 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.327496 kubelet[2883]: E0625 14:52:38.327484 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.328449 kubelet[2883]: E0625 14:52:38.328424 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.328449 kubelet[2883]: W0625 14:52:38.328443 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.328568 kubelet[2883]: E0625 14:52:38.328547 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.329653 kubelet[2883]: E0625 14:52:38.329456 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.329653 kubelet[2883]: W0625 14:52:38.329474 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.329908 kubelet[2883]: E0625 14:52:38.329800 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.329908 kubelet[2883]: W0625 14:52:38.329815 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.329000 audit: BPF prog-id=148 op=LOAD Jun 25 14:52:38.330931 kubelet[2883]: E0625 14:52:38.330901 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.330985 kubelet[2883]: E0625 14:52:38.330954 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.330000 audit: BPF prog-id=149 op=LOAD Jun 25 14:52:38.330000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.331922 kubelet[2883]: E0625 14:52:38.331707 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.331922 kubelet[2883]: W0625 14:52:38.331727 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.331922 kubelet[2883]: E0625 14:52:38.331873 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.332268 kubelet[2883]: E0625 14:52:38.332106 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.332268 kubelet[2883]: W0625 14:52:38.332124 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.332268 kubelet[2883]: E0625 14:52:38.332177 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.332480 kubelet[2883]: E0625 14:52:38.332468 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.332548 kubelet[2883]: W0625 14:52:38.332536 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.332653 kubelet[2883]: E0625 14:52:38.332626 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.332886 kubelet[2883]: E0625 14:52:38.332872 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.332969 kubelet[2883]: W0625 14:52:38.332957 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.333173 kubelet[2883]: E0625 14:52:38.333158 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.333375 kubelet[2883]: E0625 14:52:38.333363 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.333462 kubelet[2883]: W0625 14:52:38.333448 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.333682 kubelet[2883]: E0625 14:52:38.333645 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:38.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836356333656137373232316430616164663937393930323837653235 Jun 25 14:52:38.333944 kubelet[2883]: E0625 14:52:38.333929 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.334015 kubelet[2883]: W0625 14:52:38.334002 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.334089 kubelet[2883]: E0625 14:52:38.334077 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.332000 audit: BPF prog-id=150 op=LOAD Jun 25 14:52:38.332000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836356333656137373232316430616164663937393930323837653235 Jun 25 14:52:38.333000 audit: BPF prog-id=150 op=UNLOAD Jun 25 14:52:38.333000 audit: BPF prog-id=149 op=UNLOAD Jun 25 14:52:38.333000 audit: BPF prog-id=151 op=LOAD Jun 25 14:52:38.333000 audit[3379]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836356333656137373232316430616164663937393930323837653235 Jun 25 14:52:38.350349 kubelet[2883]: E0625 14:52:38.350325 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:38.351384 kubelet[2883]: W0625 14:52:38.350924 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:38.351384 kubelet[2883]: E0625 14:52:38.350969 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:38.358679 containerd[1520]: time="2024-06-25T14:52:38.358638560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nfv4j,Uid:dde68ec8-4291-486e-a2b4-4cf7ce3816e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\"" Jun 25 14:52:38.723000 audit[3433]: NETFILTER_CFG table=filter:96 family=2 entries=16 op=nft_register_rule pid=3433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:38.723000 audit[3433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffffa62920 a2=0 a3=1 items=0 ppid=3023 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:38.724000 audit[3433]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:52:38.724000 audit[3433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffffa62920 a2=0 a3=1 items=0 ppid=3023 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:38.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:52:39.937768 kubelet[2883]: E0625 14:52:39.937723 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:40.015297 containerd[1520]: time="2024-06-25T14:52:40.015256465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:40.017446 containerd[1520]: time="2024-06-25T14:52:40.017401725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:52:40.023390 containerd[1520]: time="2024-06-25T14:52:40.023353903Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:40.029259 containerd[1520]: time="2024-06-25T14:52:40.029189679Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:40.035401 containerd[1520]: time="2024-06-25T14:52:40.035357498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:40.037106 containerd[1520]: time="2024-06-25T14:52:40.037054195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.72400925s" Jun 25 14:52:40.037323 containerd[1520]: time="2024-06-25T14:52:40.037292237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:52:40.045773 containerd[1520]: time="2024-06-25T14:52:40.045733078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:52:40.061287 containerd[1520]: time="2024-06-25T14:52:40.061209788Z" level=info msg="CreateContainer within sandbox \"124841eaff29e1b73b0b6b036b20ddf85f2798c961520e810d6600935078cab5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:52:40.095647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902152444.mount: Deactivated successfully. Jun 25 14:52:40.118538 containerd[1520]: time="2024-06-25T14:52:40.118482339Z" level=info msg="CreateContainer within sandbox \"124841eaff29e1b73b0b6b036b20ddf85f2798c961520e810d6600935078cab5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6114713de2a205d28a785059de6dbc25ad4458e6d58b416a24fa59490939c883\"" Jun 25 14:52:40.119180 containerd[1520]: time="2024-06-25T14:52:40.119148226Z" level=info msg="StartContainer for \"6114713de2a205d28a785059de6dbc25ad4458e6d58b416a24fa59490939c883\"" Jun 25 14:52:40.144448 systemd[1]: Started cri-containerd-6114713de2a205d28a785059de6dbc25ad4458e6d58b416a24fa59490939c883.scope - libcontainer container 6114713de2a205d28a785059de6dbc25ad4458e6d58b416a24fa59490939c883. Jun 25 14:52:40.157000 audit: BPF prog-id=152 op=LOAD Jun 25 14:52:40.157000 audit: BPF prog-id=153 op=LOAD Jun 25 14:52:40.157000 audit[3449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=3317 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:40.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313437313364653261323035643238613738353035396465366462 Jun 25 14:52:40.157000 audit: BPF prog-id=154 op=LOAD Jun 25 14:52:40.157000 audit[3449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=3317 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:40.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313437313364653261323035643238613738353035396465366462 Jun 25 14:52:40.158000 audit: BPF prog-id=154 op=UNLOAD Jun 25 14:52:40.158000 audit: BPF prog-id=153 op=UNLOAD Jun 25 14:52:40.158000 audit: BPF prog-id=155 op=LOAD Jun 25 14:52:40.158000 audit[3449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=3317 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:40.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313437313364653261323035643238613738353035396465366462 Jun 25 14:52:40.183317 containerd[1520]: time="2024-06-25T14:52:40.183222523Z" level=info msg="StartContainer for \"6114713de2a205d28a785059de6dbc25ad4458e6d58b416a24fa59490939c883\" returns successfully" Jun 25 14:52:41.053959 kubelet[2883]: E0625 14:52:41.053840 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.053959 kubelet[2883]: W0625 14:52:41.053867 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.053959 kubelet[2883]: E0625 14:52:41.053898 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054084 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.054824 kubelet[2883]: W0625 14:52:41.054093 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054105 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054280 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.054824 kubelet[2883]: W0625 14:52:41.054289 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054300 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054469 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.054824 kubelet[2883]: W0625 14:52:41.054476 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054489 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.054824 kubelet[2883]: E0625 14:52:41.054655 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055382 kubelet[2883]: W0625 14:52:41.054662 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055382 kubelet[2883]: E0625 14:52:41.054672 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.055382 kubelet[2883]: E0625 14:52:41.054807 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055382 kubelet[2883]: W0625 14:52:41.054814 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055382 kubelet[2883]: E0625 14:52:41.054824 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.055382 kubelet[2883]: E0625 14:52:41.054951 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055382 kubelet[2883]: W0625 14:52:41.054957 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055382 kubelet[2883]: E0625 14:52:41.054967 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.055382 kubelet[2883]: E0625 14:52:41.055099 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055382 kubelet[2883]: W0625 14:52:41.055106 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055116 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055266 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055883 kubelet[2883]: W0625 14:52:41.055274 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055284 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055424 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055883 kubelet[2883]: W0625 14:52:41.055430 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055442 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055558 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.055883 kubelet[2883]: W0625 14:52:41.055573 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.055883 kubelet[2883]: E0625 14:52:41.055582 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.055696 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.056298 kubelet[2883]: W0625 14:52:41.055711 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.055720 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.055857 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.056298 kubelet[2883]: W0625 14:52:41.055865 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.055875 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.055995 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.056298 kubelet[2883]: W0625 14:52:41.056010 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.056020 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.056298 kubelet[2883]: E0625 14:52:41.056268 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.056934 kubelet[2883]: W0625 14:52:41.056278 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.056934 kubelet[2883]: E0625 14:52:41.056290 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.141004 kubelet[2883]: E0625 14:52:41.140810 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.141004 kubelet[2883]: W0625 14:52:41.140832 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.141004 kubelet[2883]: E0625 14:52:41.140860 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.141484 kubelet[2883]: E0625 14:52:41.141323 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.141484 kubelet[2883]: W0625 14:52:41.141337 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.141484 kubelet[2883]: E0625 14:52:41.141362 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.141827 kubelet[2883]: E0625 14:52:41.141667 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.141827 kubelet[2883]: W0625 14:52:41.141679 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.141827 kubelet[2883]: E0625 14:52:41.141695 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.142138 kubelet[2883]: E0625 14:52:41.142008 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.142138 kubelet[2883]: W0625 14:52:41.142019 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.142138 kubelet[2883]: E0625 14:52:41.142049 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.142449 kubelet[2883]: E0625 14:52:41.142322 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.142449 kubelet[2883]: W0625 14:52:41.142333 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.142449 kubelet[2883]: E0625 14:52:41.142357 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.142721 kubelet[2883]: E0625 14:52:41.142614 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.142721 kubelet[2883]: W0625 14:52:41.142624 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.142721 kubelet[2883]: E0625 14:52:41.142656 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.143018 kubelet[2883]: E0625 14:52:41.142875 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.143018 kubelet[2883]: W0625 14:52:41.142886 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.143018 kubelet[2883]: E0625 14:52:41.142915 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.143421 kubelet[2883]: E0625 14:52:41.143223 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.143421 kubelet[2883]: W0625 14:52:41.143260 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.143899 kubelet[2883]: E0625 14:52:41.143545 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.143899 kubelet[2883]: E0625 14:52:41.143599 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.143899 kubelet[2883]: W0625 14:52:41.143614 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.143899 kubelet[2883]: E0625 14:52:41.143625 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.145574 kubelet[2883]: E0625 14:52:41.144082 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.145574 kubelet[2883]: W0625 14:52:41.144093 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.145574 kubelet[2883]: E0625 14:52:41.144106 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.146126 kubelet[2883]: E0625 14:52:41.145869 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.146126 kubelet[2883]: W0625 14:52:41.145890 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.146126 kubelet[2883]: E0625 14:52:41.145912 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.146596 kubelet[2883]: E0625 14:52:41.146372 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.146596 kubelet[2883]: W0625 14:52:41.146385 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.146596 kubelet[2883]: E0625 14:52:41.146399 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.147389 kubelet[2883]: E0625 14:52:41.146768 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.147389 kubelet[2883]: W0625 14:52:41.146781 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.147389 kubelet[2883]: E0625 14:52:41.146868 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.147698 kubelet[2883]: E0625 14:52:41.147574 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.147698 kubelet[2883]: W0625 14:52:41.147587 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.147698 kubelet[2883]: E0625 14:52:41.147623 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.147959 kubelet[2883]: E0625 14:52:41.147946 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.148170 kubelet[2883]: W0625 14:52:41.148040 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.148170 kubelet[2883]: E0625 14:52:41.148064 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.148439 kubelet[2883]: E0625 14:52:41.148425 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.148529 kubelet[2883]: W0625 14:52:41.148518 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.148670 kubelet[2883]: E0625 14:52:41.148658 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.149068 kubelet[2883]: E0625 14:52:41.149056 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.149175 kubelet[2883]: W0625 14:52:41.149162 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.149267 kubelet[2883]: E0625 14:52:41.149256 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:52:41.149700 kubelet[2883]: E0625 14:52:41.149685 2883 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:52:41.149809 kubelet[2883]: W0625 14:52:41.149796 2883 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:52:41.149885 kubelet[2883]: E0625 14:52:41.149876 2883 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:52:41.229384 containerd[1520]: time="2024-06-25T14:52:41.229338325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:41.232056 containerd[1520]: time="2024-06-25T14:52:41.232011471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:52:41.236361 containerd[1520]: time="2024-06-25T14:52:41.236322991Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:41.241141 containerd[1520]: time="2024-06-25T14:52:41.241097957Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:41.245715 containerd[1520]: time="2024-06-25T14:52:41.245665240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:41.246493 containerd[1520]: time="2024-06-25T14:52:41.246438687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.200051842s" Jun 25 14:52:41.246493 containerd[1520]: time="2024-06-25T14:52:41.246488008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:52:41.248537 containerd[1520]: time="2024-06-25T14:52:41.248480987Z" level=info msg="CreateContainer within sandbox \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:52:41.276931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910114413.mount: Deactivated successfully. Jun 25 14:52:41.299074 containerd[1520]: time="2024-06-25T14:52:41.299007425Z" level=info msg="CreateContainer within sandbox \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7\"" Jun 25 14:52:41.299862 containerd[1520]: time="2024-06-25T14:52:41.299824633Z" level=info msg="StartContainer for \"27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7\"" Jun 25 14:52:41.331433 systemd[1]: Started cri-containerd-27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7.scope - libcontainer container 27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7. 
Jun 25 14:52:41.349000 audit: BPF prog-id=156 op=LOAD Jun 25 14:52:41.349000 audit[3524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=3367 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:41.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237383230663734373632623338373139623835346337386264646466 Jun 25 14:52:41.349000 audit: BPF prog-id=157 op=LOAD Jun 25 14:52:41.349000 audit[3524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=3367 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:41.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237383230663734373632623338373139623835346337386264646466 Jun 25 14:52:41.349000 audit: BPF prog-id=157 op=UNLOAD Jun 25 14:52:41.350000 audit: BPF prog-id=156 op=UNLOAD Jun 25 14:52:41.350000 audit: BPF prog-id=158 op=LOAD Jun 25 14:52:41.350000 audit[3524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=3367 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:41.350000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237383230663734373632623338373139623835346337386264646466 Jun 25 14:52:41.372296 containerd[1520]: time="2024-06-25T14:52:41.372201159Z" level=info msg="StartContainer for \"27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7\" returns successfully" Jun 25 14:52:41.390261 systemd[1]: cri-containerd-27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7.scope: Deactivated successfully. Jun 25 14:52:41.393000 audit: BPF prog-id=158 op=UNLOAD Jun 25 14:52:41.938114 kubelet[2883]: E0625 14:52:41.937724 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:42.035265 kubelet[2883]: I0625 14:52:42.035214 2883 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:52:42.049914 systemd[1]: run-containerd-runc-k8s.io-27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7-runc.ozYCJA.mount: Deactivated successfully. Jun 25 14:52:42.049998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7-rootfs.mount: Deactivated successfully. 
Jun 25 14:52:42.051636 kubelet[2883]: I0625 14:52:42.051360 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5cfd97c569-f8b6v" podStartSLOduration=3.321724441 podStartE2EDuration="5.051301624s" podCreationTimestamp="2024-06-25 14:52:37 +0000 UTC" firstStartedPulling="2024-06-25 14:52:38.312601541 +0000 UTC m=+20.495249776" lastFinishedPulling="2024-06-25 14:52:40.042178724 +0000 UTC m=+22.224826959" observedRunningTime="2024-06-25 14:52:41.044335773 +0000 UTC m=+23.226984008" watchObservedRunningTime="2024-06-25 14:52:42.051301624 +0000 UTC m=+24.233949859" Jun 25 14:52:42.262218 containerd[1520]: time="2024-06-25T14:52:42.262071068Z" level=info msg="shim disconnected" id=27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7 namespace=k8s.io Jun 25 14:52:42.262218 containerd[1520]: time="2024-06-25T14:52:42.262131229Z" level=warning msg="cleaning up after shim disconnected" id=27820f74762b38719b854c78bdddf170ca6860311bd7701ea4cfe4b276c26af7 namespace=k8s.io Jun 25 14:52:42.262218 containerd[1520]: time="2024-06-25T14:52:42.262141029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:52:43.039364 containerd[1520]: time="2024-06-25T14:52:43.039319984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:52:43.937591 kubelet[2883]: E0625 14:52:43.937245 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:45.815652 containerd[1520]: time="2024-06-25T14:52:45.815607169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:45.823889 containerd[1520]: time="2024-06-25T14:52:45.823842802Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:45.824125 containerd[1520]: time="2024-06-25T14:52:45.823954123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:52:45.827980 containerd[1520]: time="2024-06-25T14:52:45.827942279Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:45.831485 containerd[1520]: time="2024-06-25T14:52:45.831431910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:45.833199 containerd[1520]: time="2024-06-25T14:52:45.833142685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.793773141s" Jun 25 14:52:45.833199 containerd[1520]: time="2024-06-25T14:52:45.833199405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 
14:52:45.835972 containerd[1520]: time="2024-06-25T14:52:45.835885989Z" level=info msg="CreateContainer within sandbox \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:52:45.860957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984154496.mount: Deactivated successfully. Jun 25 14:52:45.873106 containerd[1520]: time="2024-06-25T14:52:45.873037559Z" level=info msg="CreateContainer within sandbox \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68\"" Jun 25 14:52:45.874898 containerd[1520]: time="2024-06-25T14:52:45.874457612Z" level=info msg="StartContainer for \"5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68\"" Jun 25 14:52:45.908457 systemd[1]: Started cri-containerd-5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68.scope - libcontainer container 5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68. Jun 25 14:52:45.919000 audit: BPF prog-id=159 op=LOAD Jun 25 14:52:45.925290 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 14:52:45.925415 kernel: audit: type=1334 audit(1719327165.919:462): prog-id=159 op=LOAD Jun 25 14:52:45.919000 audit[3599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=3367 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:45.940811 kubelet[2883]: E0625 14:52:45.938200 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:45.955305 kernel: audit: type=1300 audit(1719327165.919:462): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=3367 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:45.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535343766356334346664306235646533323536633039626230383436 Jun 25 14:52:45.979752 kernel: audit: type=1327 audit(1719327165.919:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535343766356334346664306235646533323536633039626230383436 Jun 25 14:52:45.919000 audit: BPF prog-id=160 op=LOAD Jun 25 14:52:45.987404 kernel: audit: type=1334 audit(1719327165.919:463): prog-id=160 op=LOAD Jun 25 14:52:45.919000 audit[3599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=3367 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:46.010768 kernel: 
audit: type=1300 audit(1719327165.919:463): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=3367 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:46.012552 kernel: audit: type=1327 audit(1719327165.919:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535343766356334346664306235646533323536633039626230383436 Jun 25 14:52:45.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535343766356334346664306235646533323536633039626230383436 Jun 25 14:52:45.919000 audit: BPF prog-id=160 op=UNLOAD Jun 25 14:52:46.037243 containerd[1520]: time="2024-06-25T14:52:46.037186811Z" level=info msg="StartContainer for \"5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68\" returns successfully" Jun 25 14:52:46.041501 kernel: audit: type=1334 audit(1719327165.919:464): prog-id=160 op=UNLOAD Jun 25 14:52:45.919000 audit: BPF prog-id=159 op=UNLOAD Jun 25 14:52:46.048678 kernel: audit: type=1334 audit(1719327165.919:465): prog-id=159 op=UNLOAD Jun 25 14:52:45.919000 audit: BPF prog-id=161 op=LOAD Jun 25 14:52:46.054750 kernel: audit: type=1334 audit(1719327165.919:466): prog-id=161 op=LOAD Jun 25 14:52:45.919000 audit[3599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=3367 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:46.078084 kernel: audit: type=1300 audit(1719327165.919:466): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=3367 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:45.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535343766356334346664306235646533323536633039626230383436 Jun 25 14:52:46.857457 systemd[1]: run-containerd-runc-k8s.io-5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68-runc.AgWT2t.mount: Deactivated successfully. Jun 25 14:52:46.987526 containerd[1520]: time="2024-06-25T14:52:46.987469996Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:52:46.989857 systemd[1]: cri-containerd-5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68.scope: Deactivated successfully. Jun 25 14:52:46.993000 audit: BPF prog-id=161 op=UNLOAD Jun 25 14:52:47.014143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68-rootfs.mount: Deactivated successfully. 
Jun 25 14:52:47.078333 kubelet[2883]: I0625 14:52:47.077624 2883 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 14:52:47.290870 kubelet[2883]: I0625 14:52:47.100010 2883 topology_manager.go:215] "Topology Admit Handler" podUID="452cade1-fc01-42b4-8e1a-60614efcd66d" podNamespace="kube-system" podName="coredns-76f75df574-kptq5" Jun 25 14:52:47.290870 kubelet[2883]: I0625 14:52:47.106456 2883 topology_manager.go:215] "Topology Admit Handler" podUID="a00cdd34-fa68-4d2a-acef-128a84544a34" podNamespace="calico-system" podName="calico-kube-controllers-5fd654d6cc-fn6hn" Jun 25 14:52:47.290870 kubelet[2883]: I0625 14:52:47.112447 2883 topology_manager.go:215] "Topology Admit Handler" podUID="60c4be3a-a62f-44dc-95b7-ebe069fe4d27" podNamespace="kube-system" podName="coredns-76f75df574-m6gb7" Jun 25 14:52:47.290870 kubelet[2883]: W0625 14:52:47.118802 2883 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3815.2.4-a-39232a46a6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.4-a-39232a46a6' and this object Jun 25 14:52:47.290870 kubelet[2883]: E0625 14:52:47.118857 2883 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3815.2.4-a-39232a46a6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.4-a-39232a46a6' and this object Jun 25 14:52:47.290870 kubelet[2883]: I0625 14:52:47.282418 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60c4be3a-a62f-44dc-95b7-ebe069fe4d27-config-volume\") pod \"coredns-76f75df574-m6gb7\" (UID: \"60c4be3a-a62f-44dc-95b7-ebe069fe4d27\") " pod="kube-system/coredns-76f75df574-m6gb7" Jun 25 14:52:47.290870 kubelet[2883]: I0625 14:52:47.282462 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm4jf\" (UniqueName: \"kubernetes.io/projected/60c4be3a-a62f-44dc-95b7-ebe069fe4d27-kube-api-access-fm4jf\") pod \"coredns-76f75df574-m6gb7\" (UID: \"60c4be3a-a62f-44dc-95b7-ebe069fe4d27\") " pod="kube-system/coredns-76f75df574-m6gb7" Jun 25 14:52:47.108334 systemd[1]: Created slice kubepods-burstable-pod452cade1_fc01_42b4_8e1a_60614efcd66d.slice - libcontainer container kubepods-burstable-pod452cade1_fc01_42b4_8e1a_60614efcd66d.slice. 
Jun 25 14:52:47.291225 kubelet[2883]: I0625 14:52:47.282489 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a00cdd34-fa68-4d2a-acef-128a84544a34-tigera-ca-bundle\") pod \"calico-kube-controllers-5fd654d6cc-fn6hn\" (UID: \"a00cdd34-fa68-4d2a-acef-128a84544a34\") " pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" Jun 25 14:52:47.291225 kubelet[2883]: I0625 14:52:47.282551 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78bj\" (UniqueName: \"kubernetes.io/projected/452cade1-fc01-42b4-8e1a-60614efcd66d-kube-api-access-z78bj\") pod \"coredns-76f75df574-kptq5\" (UID: \"452cade1-fc01-42b4-8e1a-60614efcd66d\") " pod="kube-system/coredns-76f75df574-kptq5" Jun 25 14:52:47.291225 kubelet[2883]: I0625 14:52:47.282594 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2rx\" (UniqueName: \"kubernetes.io/projected/a00cdd34-fa68-4d2a-acef-128a84544a34-kube-api-access-jf2rx\") pod \"calico-kube-controllers-5fd654d6cc-fn6hn\" (UID: \"a00cdd34-fa68-4d2a-acef-128a84544a34\") " pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" Jun 25 14:52:47.291225 kubelet[2883]: I0625 14:52:47.282626 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/452cade1-fc01-42b4-8e1a-60614efcd66d-config-volume\") pod \"coredns-76f75df574-kptq5\" (UID: \"452cade1-fc01-42b4-8e1a-60614efcd66d\") " pod="kube-system/coredns-76f75df574-kptq5" Jun 25 14:52:47.117377 systemd[1]: Created slice kubepods-besteffort-poda00cdd34_fa68_4d2a_acef_128a84544a34.slice - libcontainer container kubepods-besteffort-poda00cdd34_fa68_4d2a_acef_128a84544a34.slice. Jun 25 14:52:47.126090 systemd[1]: Created slice kubepods-burstable-pod60c4be3a_a62f_44dc_95b7_ebe069fe4d27.slice - libcontainer container kubepods-burstable-pod60c4be3a_a62f_44dc_95b7_ebe069fe4d27.slice. Jun 25 14:52:47.596249 containerd[1520]: time="2024-06-25T14:52:47.596051156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd654d6cc-fn6hn,Uid:a00cdd34-fa68-4d2a-acef-128a84544a34,Namespace:calico-system,Attempt:0,}" Jun 25 14:52:47.943421 systemd[1]: Created slice kubepods-besteffort-pode7892aac_4fe7_4e98_ad8c_38ff0dbdd0b3.slice - libcontainer container kubepods-besteffort-pode7892aac_4fe7_4e98_ad8c_38ff0dbdd0b3.slice. 
Jun 25 14:52:47.946181 containerd[1520]: time="2024-06-25T14:52:47.946075648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rstqt,Uid:e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3,Namespace:calico-system,Attempt:0,}" Jun 25 14:52:48.099720 containerd[1520]: time="2024-06-25T14:52:48.099530076Z" level=info msg="shim disconnected" id=5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68 namespace=k8s.io Jun 25 14:52:48.099720 containerd[1520]: time="2024-06-25T14:52:48.099587277Z" level=warning msg="cleaning up after shim disconnected" id=5547f5c44fd0b5de3256c09bb0846622ccf688cde801440c910a688c4de68e68 namespace=k8s.io Jun 25 14:52:48.099720 containerd[1520]: time="2024-06-25T14:52:48.099596277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:52:48.192396 containerd[1520]: time="2024-06-25T14:52:48.192353983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kptq5,Uid:452cade1-fc01-42b4-8e1a-60614efcd66d,Namespace:kube-system,Attempt:0,}" Jun 25 14:52:48.195592 containerd[1520]: time="2024-06-25T14:52:48.195068246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m6gb7,Uid:60c4be3a-a62f-44dc-95b7-ebe069fe4d27,Namespace:kube-system,Attempt:0,}" Jun 25 14:52:48.207633 containerd[1520]: time="2024-06-25T14:52:48.207565152Z" level=error msg="Failed to destroy network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.209866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22-shm.mount: Deactivated successfully. 
Jun 25 14:52:48.213635 containerd[1520]: time="2024-06-25T14:52:48.213563883Z" level=error msg="encountered an error cleaning up failed sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.213853 containerd[1520]: time="2024-06-25T14:52:48.213826165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rstqt,Uid:e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.214434 kubelet[2883]: E0625 14:52:48.214129 2883 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.214434 kubelet[2883]: E0625 14:52:48.214183 2883 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:48.214434 kubelet[2883]: E0625 14:52:48.214203 2883 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rstqt" Jun 25 14:52:48.214767 kubelet[2883]: E0625 14:52:48.214272 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rstqt_calico-system(e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rstqt_calico-system(e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:48.218863 containerd[1520]: time="2024-06-25T14:52:48.218797607Z" level=error msg="Failed to destroy network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.219299 containerd[1520]: time="2024-06-25T14:52:48.219259211Z" level=error msg="encountered an error cleaning up failed sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.219367 containerd[1520]: time="2024-06-25T14:52:48.219326292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd654d6cc-fn6hn,Uid:a00cdd34-fa68-4d2a-acef-128a84544a34,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.219556 kubelet[2883]: E0625 14:52:48.219526 2883 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.219639 kubelet[2883]: E0625 14:52:48.219582 2883 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" Jun 25 14:52:48.219639 kubelet[2883]: E0625 14:52:48.219604 2883 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" Jun 25 14:52:48.219700 kubelet[2883]: E0625 14:52:48.219661 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fd654d6cc-fn6hn_calico-system(a00cdd34-fa68-4d2a-acef-128a84544a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fd654d6cc-fn6hn_calico-system(a00cdd34-fa68-4d2a-acef-128a84544a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" podUID="a00cdd34-fa68-4d2a-acef-128a84544a34" Jun 25 14:52:48.295689 containerd[1520]: time="2024-06-25T14:52:48.295625379Z" level=error msg="Failed to destroy network for sandbox 
\"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.296273 containerd[1520]: time="2024-06-25T14:52:48.296220664Z" level=error msg="encountered an error cleaning up failed sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.296440 containerd[1520]: time="2024-06-25T14:52:48.296402385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m6gb7,Uid:60c4be3a-a62f-44dc-95b7-ebe069fe4d27,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.296828 kubelet[2883]: E0625 14:52:48.296766 2883 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.297201 kubelet[2883]: E0625 14:52:48.296939 2883 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m6gb7" Jun 25 14:52:48.297201 kubelet[2883]: E0625 14:52:48.296966 2883 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m6gb7" Jun 25 14:52:48.297201 kubelet[2883]: E0625 14:52:48.297024 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m6gb7_kube-system(60c4be3a-a62f-44dc-95b7-ebe069fe4d27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m6gb7_kube-system(60c4be3a-a62f-44dc-95b7-ebe069fe4d27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m6gb7" podUID="60c4be3a-a62f-44dc-95b7-ebe069fe4d27" Jun 25 14:52:48.306925 containerd[1520]: 
time="2024-06-25T14:52:48.306866634Z" level=error msg="Failed to destroy network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.307426 containerd[1520]: time="2024-06-25T14:52:48.307374518Z" level=error msg="encountered an error cleaning up failed sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.307559 containerd[1520]: time="2024-06-25T14:52:48.307532119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kptq5,Uid:452cade1-fc01-42b4-8e1a-60614efcd66d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.307900 kubelet[2883]: E0625 14:52:48.307871 2883 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:48.307972 kubelet[2883]: E0625 14:52:48.307941 2883 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kptq5" Jun 25 14:52:48.307972 kubelet[2883]: E0625 14:52:48.307968 2883 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kptq5" Jun 25 14:52:48.308168 kubelet[2883]: E0625 14:52:48.308041 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kptq5_kube-system(452cade1-fc01-42b4-8e1a-60614efcd66d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kptq5_kube-system(452cade1-fc01-42b4-8e1a-60614efcd66d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kptq5" 
podUID="452cade1-fc01-42b4-8e1a-60614efcd66d" Jun 25 14:52:49.063840 kubelet[2883]: I0625 14:52:49.063811 2883 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:52:49.064877 containerd[1520]: time="2024-06-25T14:52:49.064838211Z" level=info msg="StopPodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\"" Jun 25 14:52:49.065411 containerd[1520]: time="2024-06-25T14:52:49.065382216Z" level=info msg="Ensure that sandbox f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5 in task-service has been cleanup successfully" Jun 25 14:52:49.066330 kubelet[2883]: I0625 14:52:49.066304 2883 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:52:49.067116 containerd[1520]: time="2024-06-25T14:52:49.067075550Z" level=info msg="StopPodSandbox for \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\"" Jun 25 14:52:49.067503 containerd[1520]: time="2024-06-25T14:52:49.067480233Z" level=info msg="Ensure that sandbox 83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48 in task-service has been cleanup successfully" Jun 25 14:52:49.071539 containerd[1520]: time="2024-06-25T14:52:49.071499827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:52:49.072718 kubelet[2883]: I0625 14:52:49.072690 2883 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:52:49.073558 containerd[1520]: time="2024-06-25T14:52:49.073436003Z" level=info msg="StopPodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\"" Jun 25 14:52:49.073696 containerd[1520]: time="2024-06-25T14:52:49.073665045Z" level=info msg="Ensure that sandbox f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3 in task-service has been cleanup successfully" Jun 25 14:52:49.077164 kubelet[2883]: I0625 14:52:49.077125 2883 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:52:49.077737 containerd[1520]: time="2024-06-25T14:52:49.077691599Z" level=info msg="StopPodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\"" Jun 25 14:52:49.077930 containerd[1520]: time="2024-06-25T14:52:49.077902840Z" level=info msg="Ensure that sandbox 477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22 in task-service has been cleanup successfully" Jun 25 14:52:49.131703 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5-shm.mount: Deactivated successfully. 
Jun 25 14:52:49.139349 containerd[1520]: time="2024-06-25T14:52:49.139274073Z" level=error msg="StopPodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" failed" error="failed to destroy network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:49.140226 kubelet[2883]: E0625 14:52:49.140023 2883 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:52:49.140226 kubelet[2883]: E0625 14:52:49.140113 2883 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5"} Jun 25 14:52:49.140226 kubelet[2883]: E0625 14:52:49.140150 2883 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a00cdd34-fa68-4d2a-acef-128a84544a34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:52:49.140226 kubelet[2883]: E0625 14:52:49.140190 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a00cdd34-fa68-4d2a-acef-128a84544a34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" podUID="a00cdd34-fa68-4d2a-acef-128a84544a34" Jun 25 14:52:49.160262 containerd[1520]: time="2024-06-25T14:52:49.160186648Z" level=error msg="StopPodSandbox for \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" failed" error="failed to destroy network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:49.160837 kubelet[2883]: E0625 14:52:49.160652 2883 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:52:49.160837 kubelet[2883]: E0625 
14:52:49.160707 2883 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48"} Jun 25 14:52:49.160837 kubelet[2883]: E0625 14:52:49.160742 2883 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"452cade1-fc01-42b4-8e1a-60614efcd66d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:52:49.160837 kubelet[2883]: E0625 14:52:49.160787 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"452cade1-fc01-42b4-8e1a-60614efcd66d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kptq5" podUID="452cade1-fc01-42b4-8e1a-60614efcd66d" Jun 25 14:52:49.162850 containerd[1520]: time="2024-06-25T14:52:49.162794869Z" level=error msg="StopPodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" failed" error="failed to destroy network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:49.163463 kubelet[2883]: E0625 14:52:49.163280 2883 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:52:49.163463 kubelet[2883]: E0625 14:52:49.163326 2883 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3"} Jun 25 14:52:49.163463 kubelet[2883]: E0625 14:52:49.163388 2883 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60c4be3a-a62f-44dc-95b7-ebe069fe4d27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:52:49.163463 kubelet[2883]: E0625 14:52:49.163434 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60c4be3a-a62f-44dc-95b7-ebe069fe4d27\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m6gb7" podUID="60c4be3a-a62f-44dc-95b7-ebe069fe4d27" Jun 25 14:52:49.164655 containerd[1520]: time="2024-06-25T14:52:49.164606685Z" level=error msg="StopPodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" failed" error="failed to destroy network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:52:49.165120 kubelet[2883]: E0625 14:52:49.164971 2883 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:52:49.165120 kubelet[2883]: E0625 14:52:49.165020 2883 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22"} Jun 25 14:52:49.165120 kubelet[2883]: E0625 14:52:49.165051 2883 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:52:49.165120 kubelet[2883]: E0625 14:52:49.165100 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rstqt" podUID="e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3" Jun 25 14:52:52.839315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839761080.mount: Deactivated successfully. 
Jun 25 14:52:53.049928 containerd[1520]: time="2024-06-25T14:52:53.049858960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:53.051965 containerd[1520]: time="2024-06-25T14:52:53.051909736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:52:53.055477 containerd[1520]: time="2024-06-25T14:52:53.055438564Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:53.059384 containerd[1520]: time="2024-06-25T14:52:53.059327674Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:53.063381 containerd[1520]: time="2024-06-25T14:52:53.063328226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:52:53.064316 containerd[1520]: time="2024-06-25T14:52:53.064273073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.992518804s" Jun 25 14:52:53.064463 containerd[1520]: time="2024-06-25T14:52:53.064442795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:52:53.079788 containerd[1520]: time="2024-06-25T14:52:53.079735475Z" level=info msg="CreateContainer within sandbox \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:52:53.120748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258179272.mount: Deactivated successfully. Jun 25 14:52:53.138590 containerd[1520]: time="2024-06-25T14:52:53.138531459Z" level=info msg="CreateContainer within sandbox \"865c3ea77221d0aadf97990287e25bb4a31134961a6f740f1a30b4e891f05c17\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144\"" Jun 25 14:52:53.140007 containerd[1520]: time="2024-06-25T14:52:53.139956871Z" level=info msg="StartContainer for \"caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144\"" Jun 25 14:52:53.164451 systemd[1]: Started cri-containerd-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144.scope - libcontainer container caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144. 
Jun 25 14:52:53.176000 audit: BPF prog-id=162 op=LOAD Jun 25 14:52:53.181477 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:52:53.181585 kernel: audit: type=1334 audit(1719327173.176:468): prog-id=162 op=LOAD Jun 25 14:52:53.176000 audit[3887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3367 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:53.216508 kernel: audit: type=1300 audit(1719327173.176:468): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3367 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:53.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656464646132383933653432373164306339373437393162333338 Jun 25 14:52:53.241712 kernel: audit: type=1327 audit(1719327173.176:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656464646132383933653432373164306339373437393162333338 Jun 25 14:52:53.176000 audit: BPF prog-id=163 op=LOAD Jun 25 14:52:53.249310 kernel: audit: type=1334 audit(1719327173.176:469): prog-id=163 op=LOAD Jun 25 14:52:53.176000 audit[3887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3367 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:53.272058 kernel: audit: type=1300 audit(1719327173.176:469): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3367 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:53.297379 kernel: audit: type=1327 audit(1719327173.176:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656464646132383933653432373164306339373437393162333338 Jun 25 14:52:53.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656464646132383933653432373164306339373437393162333338 Jun 25 14:52:53.180000 audit: BPF prog-id=163 op=UNLOAD Jun 25 14:52:53.304206 kernel: audit: type=1334 audit(1719327173.180:470): prog-id=163 op=UNLOAD Jun 25 14:52:53.180000 audit: BPF prog-id=162 op=UNLOAD Jun 25 14:52:53.310381 kernel: audit: type=1334 audit(1719327173.180:471): prog-id=162 op=UNLOAD Jun 25 14:52:53.312580 containerd[1520]: time="2024-06-25T14:52:53.312526072Z" level=info msg="StartContainer for \"caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144\" returns successfully" Jun 25 
14:52:53.180000 audit: BPF prog-id=164 op=LOAD Jun 25 14:52:53.180000 audit[3887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3367 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:53.347664 kernel: audit: type=1334 audit(1719327173.180:472): prog-id=164 op=LOAD Jun 25 14:52:53.347816 kernel: audit: type=1300 audit(1719327173.180:472): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3367 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:53.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361656464646132383933653432373164306339373437393162333338 Jun 25 14:52:53.628684 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:52:53.628851 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 14:52:54.961000 audit[3996]: AVC avc: denied { write } for pid=3996 comm="tee" name="fd" dev="proc" ino=25454 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:54.961000 audit[3996]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6a87a0e a2=241 a3=1b6 items=1 ppid=3962 pid=3996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:54.961000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:52:54.961000 audit: PATH item=0 name="/dev/fd/63" inode=24403 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:54.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:54.969000 audit[3985]: AVC avc: denied { write } for pid=3985 comm="tee" name="fd" dev="proc" ino=24413 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:54.969000 audit[3985]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdc672a0c a2=241 a3=1b6 items=1 ppid=3965 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:54.969000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:52:54.969000 audit: PATH item=0 name="/dev/fd/63" inode=24396 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:54.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:54.978000 audit[4001]: AVC avc: denied { write } for pid=4001 comm="tee" name="fd" dev="proc" ino=24419 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:54.978000 audit[4001]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdbb46a0c a2=241 a3=1b6 items=1 ppid=3955 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:54.978000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:52:54.978000 audit: PATH item=0 name="/dev/fd/63" inode=24410 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:54.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:54.990000 audit[4011]: AVC avc: denied { write } for pid=4011 comm="tee" name="fd" dev="proc" ino=24425 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:54.990000 audit[4011]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff0cab9fd a2=241 a3=1b6 items=1 ppid=3950 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:54.990000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:52:54.990000 audit: PATH item=0 name="/dev/fd/63" inode=25463 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:54.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:55.011000 audit[4008]: AVC avc: denied { write } for pid=4008 comm="tee" name="fd" dev="proc" ino=25474 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:55.011000 audit[4008]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffff575a0c a2=241 a3=1b6 items=1 ppid=3949 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:55.011000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:52:55.011000 audit: PATH item=0 name="/dev/fd/63" inode=25460 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:55.011000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:55.034000 audit[4024]: AVC avc: denied { write } for pid=4024 comm="tee" name="fd" dev="proc" ino=24438 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:55.032000 audit[4020]: AVC avc: denied { write } for pid=4020 comm="tee" name="fd" dev="proc" ino=24435 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:52:55.032000 audit[4020]: SYSCALL arch=c00000b7 syscall=56 
success=yes exit=3 a0=ffffffffffffff9c a1=ffffca4429fc a2=241 a3=1b6 items=1 ppid=3956 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:55.032000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:52:55.032000 audit: PATH item=0 name="/dev/fd/63" inode=25470 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:55.032000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:55.034000 audit[4024]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe5aafa0d a2=241 a3=1b6 items=1 ppid=3953 pid=4024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:52:55.034000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:52:55.034000 audit: PATH item=0 name="/dev/fd/63" inode=25471 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:52:55.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:52:58.753821 kubelet[2883]: I0625 14:52:58.753770 2883 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:52:58.774772 systemd[1]: run-containerd-runc-k8s.io-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144-runc.dXR8Q7.mount: Deactivated successfully. Jun 25 14:52:58.833083 systemd[1]: run-containerd-runc-k8s.io-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144-runc.jRtSyg.mount: Deactivated successfully. 
Jun 25 14:53:01.072411 kubelet[2883]: I0625 14:53:01.072371 2883 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:53:01.083767 kubelet[2883]: I0625 14:53:01.083730 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-nfv4j" podStartSLOduration=9.378990888 podStartE2EDuration="24.083684711s" podCreationTimestamp="2024-06-25 14:52:37 +0000 UTC" firstStartedPulling="2024-06-25 14:52:38.360134135 +0000 UTC m=+20.542782370" lastFinishedPulling="2024-06-25 14:52:53.064827958 +0000 UTC m=+35.247476193" observedRunningTime="2024-06-25 14:52:54.11225649 +0000 UTC m=+36.294904725" watchObservedRunningTime="2024-06-25 14:53:01.083684711 +0000 UTC m=+43.266332946" Jun 25 14:53:01.102000 audit[4189]: NETFILTER_CFG table=filter:98 family=2 entries=15 op=nft_register_rule pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:01.107756 kernel: kauditd_printk_skb: 36 callbacks suppressed Jun 25 14:53:01.107854 kernel: audit: type=1325 audit(1719327181.102:480): table=filter:98 family=2 entries=15 op=nft_register_rule pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:01.102000 audit[4189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffe3813a20 a2=0 a3=1 items=0 ppid=3023 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.144041 kernel: audit: type=1300 audit(1719327181.102:480): arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffe3813a20 a2=0 a3=1 items=0 ppid=3023 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:01.157058 kernel: audit: type=1327 audit(1719327181.102:480): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:01.103000 audit[4189]: NETFILTER_CFG table=nat:99 family=2 entries=19 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:01.170095 kernel: audit: type=1325 audit(1719327181.103:481): table=nat:99 family=2 entries=19 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:01.103000 audit[4189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe3813a20 a2=0 a3=1 items=0 ppid=3023 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.194506 kernel: audit: type=1300 audit(1719327181.103:481): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe3813a20 a2=0 a3=1 items=0 ppid=3023 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:01.207446 
kernel: audit: type=1327 audit(1719327181.103:481): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:01.475498 systemd-networkd[1257]: vxlan.calico: Link UP Jun 25 14:53:01.475505 systemd-networkd[1257]: vxlan.calico: Gained carrier Jun 25 14:53:01.486000 audit: BPF prog-id=165 op=LOAD Jun 25 14:53:01.486000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe428c768 a2=70 a3=ffffe428c7d8 items=0 ppid=4190 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.516170 kernel: audit: type=1334 audit(1719327181.486:482): prog-id=165 op=LOAD Jun 25 14:53:01.516300 kernel: audit: type=1300 audit(1719327181.486:482): arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe428c768 a2=70 a3=ffffe428c7d8 items=0 ppid=4190 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.486000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:53:01.543052 kernel: audit: type=1327 audit(1719327181.486:482): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:53:01.486000 audit: BPF prog-id=165 op=UNLOAD Jun 25 14:53:01.550768 kernel: audit: type=1334 audit(1719327181.486:483): prog-id=165 op=UNLOAD Jun 25 14:53:01.486000 audit: BPF prog-id=166 op=LOAD Jun 25 14:53:01.486000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe428c768 a2=70 a3=4b243c items=0 ppid=4190 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.486000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:53:01.492000 audit: BPF prog-id=166 op=UNLOAD Jun 25 14:53:01.492000 audit: BPF prog-id=167 op=LOAD Jun 25 14:53:01.492000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe428c708 a2=70 a3=ffffe428c778 items=0 ppid=4190 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.492000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:53:01.515000 audit: BPF prog-id=167 op=UNLOAD Jun 25 14:53:01.515000 audit: BPF prog-id=168 op=LOAD Jun 25 14:53:01.515000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe428c738 a2=70 a3=34db4a9 items=0 ppid=4190 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.515000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:53:01.554000 audit: BPF prog-id=168 op=UNLOAD Jun 25 14:53:01.686000 audit[4267]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:01.686000 audit[4267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff03ca130 a2=0 a3=ffff7fea5fa8 items=0 ppid=4190 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.686000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:01.689000 audit[4268]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:01.689000 audit[4268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc9d4b400 a2=0 a3=ffff80019fa8 items=0 ppid=4190 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.689000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:01.693000 audit[4269]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=4269 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:01.693000 audit[4269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffc488f950 a2=0 a3=ffffba01bfa8 items=0 ppid=4190 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:01.701000 audit[4266]: NETFILTER_CFG table=raw:103 family=2 entries=19 op=nft_register_chain pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:01.701000 audit[4266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=ffffc6f7b8f0 a2=0 a3=ffff93dfffa8 items=0 ppid=4190 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:01.701000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:01.939219 containerd[1520]: time="2024-06-25T14:53:01.939178005Z" level=info msg="StopPodSandbox for 
\"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\"" Jun 25 14:53:01.940676 containerd[1520]: time="2024-06-25T14:53:01.939292966Z" level=info msg="StopPodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\"" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.037 [INFO][4319] k8s.go 608: Cleaning up netns ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.037 [INFO][4319] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" iface="eth0" netns="/var/run/netns/cni-cd854e29-5f3d-f1ba-138a-b71052372d55" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.037 [INFO][4319] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" iface="eth0" netns="/var/run/netns/cni-cd854e29-5f3d-f1ba-138a-b71052372d55" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.038 [INFO][4319] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" iface="eth0" netns="/var/run/netns/cni-cd854e29-5f3d-f1ba-138a-b71052372d55" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.038 [INFO][4319] k8s.go 615: Releasing IP address(es) ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.038 [INFO][4319] utils.go 188: Calico CNI releasing IP address ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.074 [INFO][4336] ipam_plugin.go 411: Releasing address using handleID ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.075 [INFO][4336] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.075 [INFO][4336] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.086 [WARNING][4336] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.086 [INFO][4336] ipam_plugin.go 439: Releasing address using workloadID ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.091 [INFO][4336] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:02.093406 containerd[1520]: 2024-06-25 14:53:02.092 [INFO][4319] k8s.go 621: Teardown processing complete. 
ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:02.096901 systemd[1]: run-netns-cni\x2dcd854e29\x2d5f3d\x2df1ba\x2d138a\x2db71052372d55.mount: Deactivated successfully. Jun 25 14:53:02.098860 containerd[1520]: time="2024-06-25T14:53:02.098804894Z" level=info msg="TearDown network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" successfully" Jun 25 14:53:02.098991 containerd[1520]: time="2024-06-25T14:53:02.098973415Z" level=info msg="StopPodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" returns successfully" Jun 25 14:53:02.101255 containerd[1520]: time="2024-06-25T14:53:02.101192111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rstqt,Uid:e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3,Namespace:calico-system,Attempt:1,}" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.059 [INFO][4318] k8s.go 608: Cleaning up netns ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.059 [INFO][4318] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" iface="eth0" netns="/var/run/netns/cni-87a408d5-9e78-b6ea-2097-02e817cfd5a7" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.060 [INFO][4318] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" iface="eth0" netns="/var/run/netns/cni-87a408d5-9e78-b6ea-2097-02e817cfd5a7" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.060 [INFO][4318] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" iface="eth0" netns="/var/run/netns/cni-87a408d5-9e78-b6ea-2097-02e817cfd5a7" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.060 [INFO][4318] k8s.go 615: Releasing IP address(es) ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.060 [INFO][4318] utils.go 188: Calico CNI releasing IP address ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.137 [INFO][4341] ipam_plugin.go 411: Releasing address using handleID ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.140 [INFO][4341] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.141 [INFO][4341] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.167 [WARNING][4341] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.167 [INFO][4341] ipam_plugin.go 439: Releasing address using workloadID ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.171 [INFO][4341] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:02.178514 containerd[1520]: 2024-06-25 14:53:02.173 [INFO][4318] k8s.go 621: Teardown processing complete. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:02.181098 systemd[1]: run-netns-cni\x2d87a408d5\x2d9e78\x2db6ea\x2d2097\x2d02e817cfd5a7.mount: Deactivated successfully. Jun 25 14:53:02.182057 containerd[1520]: time="2024-06-25T14:53:02.181064433Z" level=info msg="TearDown network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" successfully" Jun 25 14:53:02.182057 containerd[1520]: time="2024-06-25T14:53:02.181110194Z" level=info msg="StopPodSandbox for \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" returns successfully" Jun 25 14:53:02.182640 containerd[1520]: time="2024-06-25T14:53:02.182590084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kptq5,Uid:452cade1-fc01-42b4-8e1a-60614efcd66d,Namespace:kube-system,Attempt:1,}" Jun 25 14:53:02.350811 systemd-networkd[1257]: cali8d119c2d156: Link UP Jun 25 14:53:02.358307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8d119c2d156: link becomes ready Jun 25 14:53:02.358676 systemd-networkd[1257]: cali8d119c2d156: Gained carrier Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.226 [INFO][4351] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0 csi-node-driver- calico-system e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3 708 0 2024-06-25 14:52:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-a-39232a46a6 csi-node-driver-rstqt eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8d119c2d156 [] []}} ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.226 [INFO][4351] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.288 [INFO][4377] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" 
HandleID="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.302 [INFO][4377] ipam_plugin.go 264: Auto assigning IP ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" HandleID="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028e260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-39232a46a6", "pod":"csi-node-driver-rstqt", "timestamp":"2024-06-25 14:53:02.28858299 +0000 UTC"}, Hostname:"ci-3815.2.4-a-39232a46a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.304 [INFO][4377] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.304 [INFO][4377] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.304 [INFO][4377] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-39232a46a6' Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.313 [INFO][4377] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.319 [INFO][4377] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.324 [INFO][4377] ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.326 [INFO][4377] ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.329 [INFO][4377] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.329 [INFO][4377] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.331 [INFO][4377] ipam.go 1685: Creating new handle: k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1 Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.335 [INFO][4377] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.342 [INFO][4377] ipam.go 1216: Successfully claimed IPs: [192.168.61.1/26] block=192.168.61.0/26 handle="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.342 [INFO][4377] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.1/26] 
handle="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.342 [INFO][4377] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:02.379930 containerd[1520]: 2024-06-25 14:53:02.342 [INFO][4377] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.61.1/26] IPv6=[] ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" HandleID="k8s-pod-network.13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.380590 containerd[1520]: 2024-06-25 14:53:02.344 [INFO][4351] k8s.go 386: Populated endpoint ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"", Pod:"csi-node-driver-rstqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d119c2d156", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:02.380590 containerd[1520]: 2024-06-25 14:53:02.344 [INFO][4351] k8s.go 387: Calico CNI using IPs: [192.168.61.1/32] ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.380590 containerd[1520]: 2024-06-25 14:53:02.344 [INFO][4351] dataplane_linux.go 68: Setting the host side veth name to cali8d119c2d156 ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.380590 containerd[1520]: 2024-06-25 14:53:02.358 [INFO][4351] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.380590 containerd[1520]: 2024-06-25 14:53:02.360 
[INFO][4351] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1", Pod:"csi-node-driver-rstqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d119c2d156", MAC:"06:df:29:27:44:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:02.380590 containerd[1520]: 2024-06-25 14:53:02.377 [INFO][4351] k8s.go 500: Wrote updated endpoint to datastore ContainerID="13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1" Namespace="calico-system" Pod="csi-node-driver-rstqt" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:02.396000 audit[4415]: NETFILTER_CFG table=filter:104 family=2 entries=34 op=nft_register_chain pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:02.396000 audit[4415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffff7e54c0 a2=0 a3=ffff8f360fa8 items=0 ppid=4190 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.396000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:02.415766 containerd[1520]: time="2024-06-25T14:53:02.415607885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:53:02.416073 containerd[1520]: time="2024-06-25T14:53:02.416022088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:02.416166 containerd[1520]: time="2024-06-25T14:53:02.416072168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:53:02.416166 containerd[1520]: time="2024-06-25T14:53:02.416088008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:02.436349 systemd[1]: Started cri-containerd-13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1.scope - libcontainer container 13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1. Jun 25 14:53:02.441641 systemd-networkd[1257]: caliba7bf24efa7: Link UP Jun 25 14:53:02.452350 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliba7bf24efa7: link becomes ready Jun 25 14:53:02.452515 systemd-networkd[1257]: caliba7bf24efa7: Gained carrier Jun 25 14:53:02.458000 audit: BPF prog-id=169 op=LOAD Jun 25 14:53:02.459000 audit: BPF prog-id=170 op=LOAD Jun 25 14:53:02.459000 audit[4435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=4425 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633863636632383038336439323538313536323434363563366532 Jun 25 14:53:02.459000 audit: BPF prog-id=171 op=LOAD Jun 25 14:53:02.459000 audit[4435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=4425 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633863636632383038336439323538313536323434363563366532 Jun 25 14:53:02.459000 audit: BPF prog-id=171 op=UNLOAD Jun 25 14:53:02.459000 audit: BPF prog-id=170 op=UNLOAD Jun 25 14:53:02.460000 audit: BPF prog-id=172 op=LOAD Jun 25 14:53:02.460000 audit[4435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=4425 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133633863636632383038336439323538313536323434363563366532 Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.294 [INFO][4376] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0 coredns-76f75df574- kube-system 452cade1-fc01-42b4-8e1a-60614efcd66d 709 0 2024-06-25 14:52:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-39232a46a6 coredns-76f75df574-kptq5 eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] caliba7bf24efa7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.294 [INFO][4376] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.342 [INFO][4396] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" HandleID="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.375 [INFO][4396] ipam_plugin.go 264: Auto assigning IP ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" HandleID="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc060), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-39232a46a6", "pod":"coredns-76f75df574-kptq5", "timestamp":"2024-06-25 14:53:02.342971013 +0000 UTC"}, Hostname:"ci-3815.2.4-a-39232a46a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.376 [INFO][4396] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.376 [INFO][4396] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.376 [INFO][4396] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-39232a46a6' Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.379 [INFO][4396] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.386 [INFO][4396] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.394 [INFO][4396] ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.397 [INFO][4396] ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.401 [INFO][4396] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.401 [INFO][4396] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.403 [INFO][4396] ipam.go 1685: Creating new handle: k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.408 [INFO][4396] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.418 [INFO][4396] ipam.go 1216: Successfully claimed IPs: [192.168.61.2/26] block=192.168.61.0/26 handle="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.418 [INFO][4396] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.2/26] handle="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.418 [INFO][4396] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:53:02.465012 containerd[1520]: 2024-06-25 14:53:02.418 [INFO][4396] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.61.2/26] IPv6=[] ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" HandleID="k8s-pod-network.9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.465691 containerd[1520]: 2024-06-25 14:53:02.421 [INFO][4376] k8s.go 386: Populated endpoint ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"452cade1-fc01-42b4-8e1a-60614efcd66d", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"", Pod:"coredns-76f75df574-kptq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba7bf24efa7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:02.465691 containerd[1520]: 2024-06-25 14:53:02.421 [INFO][4376] k8s.go 387: Calico CNI using IPs: [192.168.61.2/32] ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.465691 containerd[1520]: 2024-06-25 14:53:02.421 [INFO][4376] dataplane_linux.go 68: Setting the host side veth name to caliba7bf24efa7 ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.465691 containerd[1520]: 2024-06-25 14:53:02.452 [INFO][4376] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" 
Jun 25 14:53:02.465691 containerd[1520]: 2024-06-25 14:53:02.453 [INFO][4376] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"452cade1-fc01-42b4-8e1a-60614efcd66d", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad", Pod:"coredns-76f75df574-kptq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba7bf24efa7", MAC:"96:ce:79:d7:04:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:02.465691 containerd[1520]: 2024-06-25 14:53:02.462 [INFO][4376] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad" Namespace="kube-system" Pod="coredns-76f75df574-kptq5" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:02.484797 containerd[1520]: time="2024-06-25T14:53:02.484755571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rstqt,Uid:e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1\"" Jun 25 14:53:02.487412 containerd[1520]: time="2024-06-25T14:53:02.487366790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:53:02.496000 audit[4476]: NETFILTER_CFG table=filter:105 family=2 entries=38 op=nft_register_chain pid=4476 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:02.496000 audit[4476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=ffffc8c3e730 a2=0 a3=ffff9d23bfa8 items=0 ppid=4190 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 14:53:02.496000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:02.506713 containerd[1520]: time="2024-06-25T14:53:02.506535725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:53:02.506713 containerd[1520]: time="2024-06-25T14:53:02.506643686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:02.506713 containerd[1520]: time="2024-06-25T14:53:02.506666246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:53:02.506962 containerd[1520]: time="2024-06-25T14:53:02.506684606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:02.527482 systemd[1]: Started cri-containerd-9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad.scope - libcontainer container 9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad. Jun 25 14:53:02.536000 audit: BPF prog-id=173 op=LOAD Jun 25 14:53:02.536000 audit: BPF prog-id=174 op=LOAD Jun 25 14:53:02.536000 audit[4492]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4480 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393263396339303335396462646161633335663234646433346563 Jun 25 14:53:02.537000 audit: BPF prog-id=175 op=LOAD Jun 25 14:53:02.537000 audit[4492]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4480 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393263396339303335396462646161633335663234646433346563 Jun 25 14:53:02.537000 audit: BPF prog-id=175 op=UNLOAD Jun 25 14:53:02.537000 audit: BPF prog-id=174 op=UNLOAD Jun 25 14:53:02.537000 audit: BPF prog-id=176 op=LOAD Jun 25 14:53:02.537000 audit[4492]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4480 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936393263396339303335396462646161633335663234646433346563 Jun 25 14:53:02.568950 containerd[1520]: time="2024-06-25T14:53:02.568897484Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kptq5,Uid:452cade1-fc01-42b4-8e1a-60614efcd66d,Namespace:kube-system,Attempt:1,} returns sandbox id \"9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad\"" Jun 25 14:53:02.572164 containerd[1520]: time="2024-06-25T14:53:02.572120187Z" level=info msg="CreateContainer within sandbox \"9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:53:02.611975 containerd[1520]: time="2024-06-25T14:53:02.609713211Z" level=info msg="CreateContainer within sandbox \"9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c07bfe9560ea19acfd79a8dc9454c94fac8ba2acde371910611845800e1ff1fa\"" Jun 25 14:53:02.611975 containerd[1520]: time="2024-06-25T14:53:02.610581057Z" level=info msg="StartContainer for \"c07bfe9560ea19acfd79a8dc9454c94fac8ba2acde371910611845800e1ff1fa\"" Jun 25 14:53:02.636480 systemd[1]: Started cri-containerd-c07bfe9560ea19acfd79a8dc9454c94fac8ba2acde371910611845800e1ff1fa.scope - libcontainer container c07bfe9560ea19acfd79a8dc9454c94fac8ba2acde371910611845800e1ff1fa. Jun 25 14:53:02.645000 audit: BPF prog-id=177 op=LOAD Jun 25 14:53:02.646000 audit: BPF prog-id=178 op=LOAD Jun 25 14:53:02.646000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4480 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330376266653935363065613139616366643739613864633934353463 Jun 25 14:53:02.646000 audit: BPF prog-id=179 op=LOAD Jun 25 14:53:02.646000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4480 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330376266653935363065613139616366643739613864633934353463 Jun 25 14:53:02.646000 audit: BPF prog-id=179 op=UNLOAD Jun 25 14:53:02.646000 audit: BPF prog-id=178 op=UNLOAD Jun 25 14:53:02.646000 audit: BPF prog-id=180 op=LOAD Jun 25 14:53:02.646000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4480 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:02.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330376266653935363065613139616366643739613864633934353463 Jun 25 14:53:02.665916 containerd[1520]: time="2024-06-25T14:53:02.665864647Z" level=info 
msg="StartContainer for \"c07bfe9560ea19acfd79a8dc9454c94fac8ba2acde371910611845800e1ff1fa\" returns successfully" Jun 25 14:53:03.147918 kubelet[2883]: I0625 14:53:03.147867 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kptq5" podStartSLOduration=32.147816868 podStartE2EDuration="32.147816868s" podCreationTimestamp="2024-06-25 14:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:53:03.145653333 +0000 UTC m=+45.328301528" watchObservedRunningTime="2024-06-25 14:53:03.147816868 +0000 UTC m=+45.330465103" Jun 25 14:53:03.169000 audit[4557]: NETFILTER_CFG table=filter:106 family=2 entries=14 op=nft_register_rule pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:03.169000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffc512b0f0 a2=0 a3=1 items=0 ppid=3023 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:03.170000 audit[4557]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:03.170000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc512b0f0 a2=0 a3=1 items=0 ppid=3023 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:03.180000 audit[4559]: NETFILTER_CFG table=filter:108 family=2 entries=11 op=nft_register_rule pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:03.180000 audit[4559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffcbec89f0 a2=0 a3=1 items=0 ppid=3023 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:03.182000 audit[4559]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:03.182000 audit[4559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffcbec89f0 a2=0 a3=1 items=0 ppid=3023 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:03.370517 systemd-networkd[1257]: vxlan.calico: Gained IPv6LL Jun 25 14:53:03.764150 containerd[1520]: time="2024-06-25T14:53:03.764103238Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:03.766494 containerd[1520]: time="2024-06-25T14:53:03.766448614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:53:03.770952 containerd[1520]: time="2024-06-25T14:53:03.770909965Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:03.775426 containerd[1520]: time="2024-06-25T14:53:03.775381836Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:03.779162 containerd[1520]: time="2024-06-25T14:53:03.779115542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:03.779905 containerd[1520]: time="2024-06-25T14:53:03.779859547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.292227035s" Jun 25 14:53:03.779989 containerd[1520]: time="2024-06-25T14:53:03.779906628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:53:03.783310 containerd[1520]: time="2024-06-25T14:53:03.782860768Z" level=info msg="CreateContainer within sandbox \"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:53:03.807485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231261781.mount: Deactivated successfully. Jun 25 14:53:03.823349 containerd[1520]: time="2024-06-25T14:53:03.823290210Z" level=info msg="CreateContainer within sandbox \"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"efa219aa8608aeac5b30bd28d10ee79c47523a58c54ba3c4311f37138e0615ba\"" Jun 25 14:53:03.824215 containerd[1520]: time="2024-06-25T14:53:03.824172816Z" level=info msg="StartContainer for \"efa219aa8608aeac5b30bd28d10ee79c47523a58c54ba3c4311f37138e0615ba\"" Jun 25 14:53:03.852476 systemd[1]: Started cri-containerd-efa219aa8608aeac5b30bd28d10ee79c47523a58c54ba3c4311f37138e0615ba.scope - libcontainer container efa219aa8608aeac5b30bd28d10ee79c47523a58c54ba3c4311f37138e0615ba. 
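The pod_startup_latency_tracker entry above reports podStartSLOduration=32.147816868 for coredns-76f75df574-kptq5. That figure matches watchObservedRunningTime (14:53:03.147816868) minus podCreationTimestamp (14:52:31) exactly, and the image-pull timestamps in the same entry are zero-valued, so nothing is subtracted for pulling; this suggests that simple difference is the arithmetic behind the number. A quick check in Python, with the timestamps truncated to microsecond resolution:

    from datetime import datetime, timezone

    created = datetime(2024, 6, 25, 14, 52, 31, tzinfo=timezone.utc)
    # 14:53:03.147816868 rounded to datetime's microsecond resolution
    observed = datetime(2024, 6, 25, 14, 53, 3, 147817, tzinfo=timezone.utc)

    print((observed - created).total_seconds())   # 32.147817, i.e. the logged ~32.147816868s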
Jun 25 14:53:03.870000 audit: BPF prog-id=181 op=LOAD Jun 25 14:53:03.870000 audit[4577]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=4425 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613231396161383630386165616335623330626432386431306565 Jun 25 14:53:03.870000 audit: BPF prog-id=182 op=LOAD Jun 25 14:53:03.870000 audit[4577]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=4425 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613231396161383630386165616335623330626432386431306565 Jun 25 14:53:03.871000 audit: BPF prog-id=182 op=UNLOAD Jun 25 14:53:03.871000 audit: BPF prog-id=181 op=UNLOAD Jun 25 14:53:03.871000 audit: BPF prog-id=183 op=LOAD Jun 25 14:53:03.871000 audit[4577]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=4425 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:03.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566613231396161383630386165616335623330626432386431306565 Jun 25 14:53:03.893922 containerd[1520]: time="2024-06-25T14:53:03.893838741Z" level=info msg="StartContainer for \"efa219aa8608aeac5b30bd28d10ee79c47523a58c54ba3c4311f37138e0615ba\" returns successfully" Jun 25 14:53:03.896114 containerd[1520]: time="2024-06-25T14:53:03.896057116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:53:03.940780 containerd[1520]: time="2024-06-25T14:53:03.940738427Z" level=info msg="StopPodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\"" Jun 25 14:53:03.943324 containerd[1520]: time="2024-06-25T14:53:03.943283085Z" level=info msg="StopPodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\"" Jun 25 14:53:03.946845 systemd-networkd[1257]: cali8d119c2d156: Gained IPv6LL Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:03.998 [INFO][4630] k8s.go 608: Cleaning up netns ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:03.998 [INFO][4630] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" iface="eth0" netns="/var/run/netns/cni-2c3e1877-1d80-3799-30c2-15205f2c1a1c" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:03.999 [INFO][4630] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" iface="eth0" netns="/var/run/netns/cni-2c3e1877-1d80-3799-30c2-15205f2c1a1c" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.001 [INFO][4630] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" iface="eth0" netns="/var/run/netns/cni-2c3e1877-1d80-3799-30c2-15205f2c1a1c" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.001 [INFO][4630] k8s.go 615: Releasing IP address(es) ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.001 [INFO][4630] utils.go 188: Calico CNI releasing IP address ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.041 [INFO][4647] ipam_plugin.go 411: Releasing address using handleID ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.041 [INFO][4647] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.042 [INFO][4647] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.051 [WARNING][4647] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.051 [INFO][4647] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.053 [INFO][4647] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:04.055354 containerd[1520]: 2024-06-25 14:53:04.054 [INFO][4630] k8s.go 621: Teardown processing complete. 
ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:04.055815 containerd[1520]: time="2024-06-25T14:53:04.055642623Z" level=info msg="TearDown network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" successfully" Jun 25 14:53:04.055815 containerd[1520]: time="2024-06-25T14:53:04.055693303Z" level=info msg="StopPodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" returns successfully" Jun 25 14:53:04.056591 containerd[1520]: time="2024-06-25T14:53:04.056536789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m6gb7,Uid:60c4be3a-a62f-44dc-95b7-ebe069fe4d27,Namespace:kube-system,Attempt:1,}" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.021 [INFO][4640] k8s.go 608: Cleaning up netns ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.021 [INFO][4640] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" iface="eth0" netns="/var/run/netns/cni-9ce7e202-c91b-984b-10c4-de3098d49bca" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.021 [INFO][4640] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" iface="eth0" netns="/var/run/netns/cni-9ce7e202-c91b-984b-10c4-de3098d49bca" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.021 [INFO][4640] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" iface="eth0" netns="/var/run/netns/cni-9ce7e202-c91b-984b-10c4-de3098d49bca" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.021 [INFO][4640] k8s.go 615: Releasing IP address(es) ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.021 [INFO][4640] utils.go 188: Calico CNI releasing IP address ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.059 [INFO][4652] ipam_plugin.go 411: Releasing address using handleID ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.060 [INFO][4652] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.060 [INFO][4652] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.071 [WARNING][4652] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.071 [INFO][4652] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.074 [INFO][4652] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:04.079615 containerd[1520]: 2024-06-25 14:53:04.076 [INFO][4640] k8s.go 621: Teardown processing complete. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:04.080351 containerd[1520]: time="2024-06-25T14:53:04.080304353Z" level=info msg="TearDown network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" successfully" Jun 25 14:53:04.080456 containerd[1520]: time="2024-06-25T14:53:04.080438833Z" level=info msg="StopPodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" returns successfully" Jun 25 14:53:04.081300 containerd[1520]: time="2024-06-25T14:53:04.081201439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd654d6cc-fn6hn,Uid:a00cdd34-fa68-4d2a-acef-128a84544a34,Namespace:calico-system,Attempt:1,}" Jun 25 14:53:04.097756 systemd[1]: run-netns-cni\x2d2c3e1877\x2d1d80\x2d3799\x2d30c2\x2d15205f2c1a1c.mount: Deactivated successfully. Jun 25 14:53:04.097856 systemd[1]: run-netns-cni\x2d9ce7e202\x2dc91b\x2d984b\x2d10c4\x2dde3098d49bca.mount: Deactivated successfully. 
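The PROCTITLE fields on the audit records in this section carry the triggering command line hex-encoded, with NUL bytes separating the argv entries, so they can be decoded directly. For example, the proctitle logged with the NETFILTER_CFG event at 14:53:02.496 decodes to an iptables-nft-restore invocation; a small Python sketch:

    def decode_proctitle(hexstr: str) -> str:
        # Audit PROCTITLE: hex-encoded argv, NUL-separated.
        return bytes.fromhex(hexstr).decode().replace("\x00", " ")

    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365"
        "002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
    ))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000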
Jun 25 14:53:04.203829 systemd-networkd[1257]: caliba7bf24efa7: Gained IPv6LL Jun 25 14:53:04.263666 systemd-networkd[1257]: cali5edd99dce4c: Link UP Jun 25 14:53:04.275563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:53:04.275707 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5edd99dce4c: link becomes ready Jun 25 14:53:04.277081 systemd-networkd[1257]: cali5edd99dce4c: Gained carrier Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.149 [INFO][4659] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0 coredns-76f75df574- kube-system 60c4be3a-a62f-44dc-95b7-ebe069fe4d27 737 0 2024-06-25 14:52:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-39232a46a6 coredns-76f75df574-m6gb7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5edd99dce4c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.149 [INFO][4659] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.195 [INFO][4684] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" HandleID="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.208 [INFO][4684] ipam_plugin.go 264: Auto assigning IP ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" HandleID="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000301b70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-39232a46a6", "pod":"coredns-76f75df574-m6gb7", "timestamp":"2024-06-25 14:53:04.195470665 +0000 UTC"}, Hostname:"ci-3815.2.4-a-39232a46a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.208 [INFO][4684] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.208 [INFO][4684] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.209 [INFO][4684] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-39232a46a6' Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.211 [INFO][4684] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.222 [INFO][4684] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.227 [INFO][4684] ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.230 [INFO][4684] ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.233 [INFO][4684] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.233 [INFO][4684] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.235 [INFO][4684] ipam.go 1685: Creating new handle: k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69 Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.241 [INFO][4684] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.248 [INFO][4684] ipam.go 1216: Successfully claimed IPs: [192.168.61.3/26] block=192.168.61.0/26 handle="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.248 [INFO][4684] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.3/26] handle="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.248 [INFO][4684] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:53:04.293672 containerd[1520]: 2024-06-25 14:53:04.248 [INFO][4684] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.61.3/26] IPv6=[] ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" HandleID="k8s-pod-network.082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.294419 containerd[1520]: 2024-06-25 14:53:04.250 [INFO][4659] k8s.go 386: Populated endpoint ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"60c4be3a-a62f-44dc-95b7-ebe069fe4d27", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"", Pod:"coredns-76f75df574-m6gb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5edd99dce4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:04.294419 containerd[1520]: 2024-06-25 14:53:04.250 [INFO][4659] k8s.go 387: Calico CNI using IPs: [192.168.61.3/32] ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.294419 containerd[1520]: 2024-06-25 14:53:04.250 [INFO][4659] dataplane_linux.go 68: Setting the host side veth name to cali5edd99dce4c ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.294419 containerd[1520]: 2024-06-25 14:53:04.279 [INFO][4659] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" 
Jun 25 14:53:04.294419 containerd[1520]: 2024-06-25 14:53:04.280 [INFO][4659] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"60c4be3a-a62f-44dc-95b7-ebe069fe4d27", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69", Pod:"coredns-76f75df574-m6gb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5edd99dce4c", MAC:"f2:21:a9:1d:9b:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:04.294419 containerd[1520]: 2024-06-25 14:53:04.292 [INFO][4659] k8s.go 500: Wrote updated endpoint to datastore ContainerID="082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69" Namespace="kube-system" Pod="coredns-76f75df574-m6gb7" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:04.323315 systemd-networkd[1257]: cali7610b94d6ad: Link UP Jun 25 14:53:04.328514 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7610b94d6ad: link becomes ready Jun 25 14:53:04.328143 systemd-networkd[1257]: cali7610b94d6ad: Gained carrier Jun 25 14:53:04.329000 audit[4713]: NETFILTER_CFG table=filter:110 family=2 entries=34 op=nft_register_chain pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:04.329000 audit[4713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18220 a0=3 a1=ffffca588920 a2=0 a3=ffffa1a9ffa8 items=0 ppid=4190 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.329000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 
14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.187 [INFO][4673] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0 calico-kube-controllers-5fd654d6cc- calico-system a00cdd34-fa68-4d2a-acef-128a84544a34 738 0 2024-06-25 14:52:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fd654d6cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-a-39232a46a6 calico-kube-controllers-5fd654d6cc-fn6hn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7610b94d6ad [] []}} ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.187 [INFO][4673] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.228 [INFO][4692] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" HandleID="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.247 [INFO][4692] ipam_plugin.go 264: Auto assigning IP ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" HandleID="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-39232a46a6", "pod":"calico-kube-controllers-5fd654d6cc-fn6hn", "timestamp":"2024-06-25 14:53:04.22808081 +0000 UTC"}, Hostname:"ci-3815.2.4-a-39232a46a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.247 [INFO][4692] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.248 [INFO][4692] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.248 [INFO][4692] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-39232a46a6' Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.250 [INFO][4692] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.276 [INFO][4692] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.281 [INFO][4692] ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.284 [INFO][4692] ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.287 [INFO][4692] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.287 [INFO][4692] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.289 [INFO][4692] ipam.go 1685: Creating new handle: k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554 Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.297 [INFO][4692] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.303 [INFO][4692] ipam.go 1216: Successfully claimed IPs: [192.168.61.4/26] block=192.168.61.0/26 handle="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.303 [INFO][4692] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.4/26] handle="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.303 [INFO][4692] ipam_plugin.go 373: Released host-wide IPAM lock. 
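Each of the three CNI ADD operations in this section ends with Calico's IPAM confirming the node's affinity for the block 192.168.61.0/26 and handing out 192.168.61.2, .3 and .4 in turn. A /26 covers 64 addresses (192.168.61.0 through .63), which is easy to sanity-check with the Python standard library:

    import ipaddress

    block = ipaddress.ip_network("192.168.61.0/26")
    assigned = ["192.168.61.2", "192.168.61.3", "192.168.61.4"]

    print(block.num_addresses)            # 64
    print(block[0], "-", block[-1])       # 192.168.61.0 - 192.168.61.63
    print(all(ipaddress.ip_address(a) in block for a in assigned))   # True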
Jun 25 14:53:04.341652 containerd[1520]: 2024-06-25 14:53:04.303 [INFO][4692] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.61.4/26] IPv6=[] ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" HandleID="k8s-pod-network.3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.342292 containerd[1520]: 2024-06-25 14:53:04.306 [INFO][4673] k8s.go 386: Populated endpoint ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0", GenerateName:"calico-kube-controllers-5fd654d6cc-", Namespace:"calico-system", SelfLink:"", UID:"a00cdd34-fa68-4d2a-acef-128a84544a34", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fd654d6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"", Pod:"calico-kube-controllers-5fd654d6cc-fn6hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7610b94d6ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:04.342292 containerd[1520]: 2024-06-25 14:53:04.306 [INFO][4673] k8s.go 387: Calico CNI using IPs: [192.168.61.4/32] ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.342292 containerd[1520]: 2024-06-25 14:53:04.306 [INFO][4673] dataplane_linux.go 68: Setting the host side veth name to cali7610b94d6ad ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.342292 containerd[1520]: 2024-06-25 14:53:04.328 [INFO][4673] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.342292 containerd[1520]: 2024-06-25 14:53:04.329 [INFO][4673] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0", GenerateName:"calico-kube-controllers-5fd654d6cc-", Namespace:"calico-system", SelfLink:"", UID:"a00cdd34-fa68-4d2a-acef-128a84544a34", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fd654d6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554", Pod:"calico-kube-controllers-5fd654d6cc-fn6hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7610b94d6ad", MAC:"d2:b8:c9:88:a8:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:04.342292 containerd[1520]: 2024-06-25 14:53:04.340 [INFO][4673] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554" Namespace="calico-system" Pod="calico-kube-controllers-5fd654d6cc-fn6hn" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:04.356508 containerd[1520]: time="2024-06-25T14:53:04.355896449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:53:04.356508 containerd[1520]: time="2024-06-25T14:53:04.355953130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:04.356508 containerd[1520]: time="2024-06-25T14:53:04.355967170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:53:04.356508 containerd[1520]: time="2024-06-25T14:53:04.355976490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:04.365000 audit[4743]: NETFILTER_CFG table=filter:111 family=2 entries=42 op=nft_register_chain pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:04.365000 audit[4743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21016 a0=3 a1=ffffc9706820 a2=0 a3=ffff89f0dfa8 items=0 ppid=4190 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.365000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:04.379490 systemd[1]: Started cri-containerd-082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69.scope - libcontainer container 082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69. Jun 25 14:53:04.394000 audit: BPF prog-id=184 op=LOAD Jun 25 14:53:04.395000 audit: BPF prog-id=185 op=LOAD Jun 25 14:53:04.395000 audit[4742]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4728 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038326537643931646432383164363138633439636164313438313139 Jun 25 14:53:04.395000 audit: BPF prog-id=186 op=LOAD Jun 25 14:53:04.395000 audit[4742]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4728 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038326537643931646432383164363138633439636164313438313139 Jun 25 14:53:04.395000 audit: BPF prog-id=186 op=UNLOAD Jun 25 14:53:04.395000 audit: BPF prog-id=185 op=UNLOAD Jun 25 14:53:04.395000 audit: BPF prog-id=187 op=LOAD Jun 25 14:53:04.395000 audit[4742]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4728 pid=4742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038326537643931646432383164363138633439636164313438313139 Jun 25 14:53:04.401975 containerd[1520]: time="2024-06-25T14:53:04.401817365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:53:04.402925 containerd[1520]: time="2024-06-25T14:53:04.402684891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:04.402925 containerd[1520]: time="2024-06-25T14:53:04.402747692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:53:04.402925 containerd[1520]: time="2024-06-25T14:53:04.402763972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:04.425482 systemd[1]: Started cri-containerd-3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554.scope - libcontainer container 3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554. Jun 25 14:53:04.431046 containerd[1520]: time="2024-06-25T14:53:04.430650884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m6gb7,Uid:60c4be3a-a62f-44dc-95b7-ebe069fe4d27,Namespace:kube-system,Attempt:1,} returns sandbox id \"082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69\"" Jun 25 14:53:04.436414 containerd[1520]: time="2024-06-25T14:53:04.436319883Z" level=info msg="CreateContainer within sandbox \"082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:53:04.442000 audit: BPF prog-id=188 op=LOAD Jun 25 14:53:04.442000 audit: BPF prog-id=189 op=LOAD Jun 25 14:53:04.442000 audit[4778]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=4768 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364373334323234303539376232346462646335653935343866616466 Jun 25 14:53:04.442000 audit: BPF prog-id=190 op=LOAD Jun 25 14:53:04.442000 audit[4778]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=4768 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364373334323234303539376232346462646335653935343866616466 Jun 25 14:53:04.443000 audit: BPF prog-id=190 op=UNLOAD Jun 25 14:53:04.443000 audit: BPF prog-id=189 op=UNLOAD Jun 25 14:53:04.443000 audit: BPF prog-id=191 op=LOAD Jun 25 14:53:04.443000 audit[4778]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=4768 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.443000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364373334323234303539376232346462646335653935343866616466 Jun 25 14:53:04.467313 containerd[1520]: time="2024-06-25T14:53:04.467269536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd654d6cc-fn6hn,Uid:a00cdd34-fa68-4d2a-acef-128a84544a34,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554\"" Jun 25 14:53:04.475455 containerd[1520]: time="2024-06-25T14:53:04.475403232Z" level=info msg="CreateContainer within sandbox \"082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"258f1032578fca4379c9dbead2c282278e662bb5238dbc94d93bde6b6903e59f\"" Jun 25 14:53:04.476279 containerd[1520]: time="2024-06-25T14:53:04.476147837Z" level=info msg="StartContainer for \"258f1032578fca4379c9dbead2c282278e662bb5238dbc94d93bde6b6903e59f\"" Jun 25 14:53:04.502465 systemd[1]: Started cri-containerd-258f1032578fca4379c9dbead2c282278e662bb5238dbc94d93bde6b6903e59f.scope - libcontainer container 258f1032578fca4379c9dbead2c282278e662bb5238dbc94d93bde6b6903e59f. Jun 25 14:53:04.511000 audit: BPF prog-id=192 op=LOAD Jun 25 14:53:04.512000 audit: BPF prog-id=193 op=LOAD Jun 25 14:53:04.512000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=4728 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235386631303332353738666361343337396339646265616432633238 Jun 25 14:53:04.513000 audit: BPF prog-id=194 op=LOAD Jun 25 14:53:04.513000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=4728 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235386631303332353738666361343337396339646265616432633238 Jun 25 14:53:04.513000 audit: BPF prog-id=194 op=UNLOAD Jun 25 14:53:04.513000 audit: BPF prog-id=193 op=UNLOAD Jun 25 14:53:04.513000 audit: BPF prog-id=195 op=LOAD Jun 25 14:53:04.513000 audit[4816]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=4728 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:04.513000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235386631303332353738666361343337396339646265616432633238 Jun 25 14:53:04.534875 containerd[1520]: time="2024-06-25T14:53:04.534804681Z" level=info msg="StartContainer for \"258f1032578fca4379c9dbead2c282278e662bb5238dbc94d93bde6b6903e59f\" returns successfully" Jun 25 14:53:05.165327 kubelet[2883]: I0625 14:53:05.165063 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m6gb7" podStartSLOduration=34.165002766 podStartE2EDuration="34.165002766s" podCreationTimestamp="2024-06-25 14:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:53:05.164306081 +0000 UTC m=+47.346954316" watchObservedRunningTime="2024-06-25 14:53:05.165002766 +0000 UTC m=+47.347651001" Jun 25 14:53:05.218000 audit[4850]: NETFILTER_CFG table=filter:112 family=2 entries=8 op=nft_register_rule pid=4850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:05.218000 audit[4850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff02153d0 a2=0 a3=1 items=0 ppid=3023 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:05.233000 audit[4850]: NETFILTER_CFG table=nat:113 family=2 entries=44 op=nft_register_rule pid=4850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:05.233000 audit[4850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff02153d0 a2=0 a3=1 items=0 ppid=3023 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:05.258000 audit[4852]: NETFILTER_CFG table=filter:114 family=2 entries=8 op=nft_register_rule pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:05.258000 audit[4852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd9cb2450 a2=0 a3=1 items=0 ppid=3023 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.258000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:05.266000 audit[4852]: NETFILTER_CFG table=nat:115 family=2 entries=56 op=nft_register_chain pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:05.266000 audit[4852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd9cb2450 a2=0 a3=1 items=0 ppid=3023 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:05.314992 containerd[1520]: time="2024-06-25T14:53:05.314940907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:05.317697 containerd[1520]: time="2024-06-25T14:53:05.317626285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:53:05.321222 containerd[1520]: time="2024-06-25T14:53:05.321168509Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:05.324703 containerd[1520]: time="2024-06-25T14:53:05.324653533Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:05.328053 containerd[1520]: time="2024-06-25T14:53:05.328002796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:05.328627 containerd[1520]: time="2024-06-25T14:53:05.328577559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.43209824s" Jun 25 14:53:05.328627 containerd[1520]: time="2024-06-25T14:53:05.328625320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:53:05.330055 containerd[1520]: time="2024-06-25T14:53:05.330011489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:53:05.331821 containerd[1520]: time="2024-06-25T14:53:05.331761061Z" level=info msg="CreateContainer within sandbox \"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:53:05.362889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442449344.mount: Deactivated successfully. Jun 25 14:53:05.383625 containerd[1520]: time="2024-06-25T14:53:05.383552614Z" level=info msg="CreateContainer within sandbox \"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8ee927227f5207b2dfe547c97aab88a763359adb661549ac0c2d3d120611149f\"" Jun 25 14:53:05.384765 containerd[1520]: time="2024-06-25T14:53:05.384716662Z" level=info msg="StartContainer for \"8ee927227f5207b2dfe547c97aab88a763359adb661549ac0c2d3d120611149f\"" Jun 25 14:53:05.420476 systemd[1]: Started cri-containerd-8ee927227f5207b2dfe547c97aab88a763359adb661549ac0c2d3d120611149f.scope - libcontainer container 8ee927227f5207b2dfe547c97aab88a763359adb661549ac0c2d3d120611149f. 
Jun 25 14:53:05.432000 audit: BPF prog-id=196 op=LOAD Jun 25 14:53:05.432000 audit[4863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4425 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653932373232376635323037623264666535343763393761616238 Jun 25 14:53:05.433000 audit: BPF prog-id=197 op=LOAD Jun 25 14:53:05.433000 audit[4863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4425 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653932373232376635323037623264666535343763393761616238 Jun 25 14:53:05.433000 audit: BPF prog-id=197 op=UNLOAD Jun 25 14:53:05.433000 audit: BPF prog-id=196 op=UNLOAD Jun 25 14:53:05.433000 audit: BPF prog-id=198 op=LOAD Jun 25 14:53:05.433000 audit[4863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4425 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:05.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653932373232376635323037623264666535343763393761616238 Jun 25 14:53:05.453736 containerd[1520]: time="2024-06-25T14:53:05.453660331Z" level=info msg="StartContainer for \"8ee927227f5207b2dfe547c97aab88a763359adb661549ac0c2d3d120611149f\" returns successfully" Jun 25 14:53:05.802425 systemd-networkd[1257]: cali5edd99dce4c: Gained IPv6LL Jun 25 14:53:05.930419 systemd-networkd[1257]: cali7610b94d6ad: Gained IPv6LL Jun 25 14:53:06.037263 kubelet[2883]: I0625 14:53:06.037175 2883 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:53:06.037263 kubelet[2883]: I0625 14:53:06.037254 2883 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:53:06.096704 systemd[1]: run-containerd-runc-k8s.io-8ee927227f5207b2dfe547c97aab88a763359adb661549ac0c2d3d120611149f-runc.1SA4vf.mount: Deactivated successfully. 
Jun 25 14:53:06.177017 kubelet[2883]: I0625 14:53:06.176943 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-rstqt" podStartSLOduration=25.334461584 podStartE2EDuration="28.176891202s" podCreationTimestamp="2024-06-25 14:52:38 +0000 UTC" firstStartedPulling="2024-06-25 14:53:02.486567104 +0000 UTC m=+44.669215339" lastFinishedPulling="2024-06-25 14:53:05.328996722 +0000 UTC m=+47.511644957" observedRunningTime="2024-06-25 14:53:06.168449225 +0000 UTC m=+48.351097460" watchObservedRunningTime="2024-06-25 14:53:06.176891202 +0000 UTC m=+48.359539437" Jun 25 14:53:08.279136 containerd[1520]: time="2024-06-25T14:53:08.279073490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:08.281701 containerd[1520]: time="2024-06-25T14:53:08.281645986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:53:08.287874 containerd[1520]: time="2024-06-25T14:53:08.287826627Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:08.292871 containerd[1520]: time="2024-06-25T14:53:08.292825780Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:08.298380 containerd[1520]: time="2024-06-25T14:53:08.298334337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:08.300214 containerd[1520]: time="2024-06-25T14:53:08.300153669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 2.969937778s" Jun 25 14:53:08.300691 containerd[1520]: time="2024-06-25T14:53:08.300579351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:53:08.325610 containerd[1520]: time="2024-06-25T14:53:08.325383275Z" level=info msg="CreateContainer within sandbox \"3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:53:08.368723 containerd[1520]: time="2024-06-25T14:53:08.368648480Z" level=info msg="CreateContainer within sandbox \"3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461\"" Jun 25 14:53:08.369553 containerd[1520]: time="2024-06-25T14:53:08.369514366Z" level=info msg="StartContainer for \"01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461\"" Jun 25 14:53:08.398586 systemd[1]: Started cri-containerd-01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461.scope - libcontainer container 
01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461. Jun 25 14:53:08.428796 kernel: kauditd_printk_skb: 154 callbacks suppressed Jun 25 14:53:08.428936 kernel: audit: type=1334 audit(1719327188.417:552): prog-id=199 op=LOAD Jun 25 14:53:08.417000 audit: BPF prog-id=199 op=LOAD Jun 25 14:53:08.417000 audit: BPF prog-id=200 op=LOAD Jun 25 14:53:08.434473 kernel: audit: type=1334 audit(1719327188.417:553): prog-id=200 op=LOAD Jun 25 14:53:08.417000 audit[4919]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4768 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:08.458732 kernel: audit: type=1300 audit(1719327188.417:553): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4768 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:08.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626337336136356334323261386137303566643634303764376266 Jun 25 14:53:08.483049 kernel: audit: type=1327 audit(1719327188.417:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626337336136356334323261386137303566643634303764376266 Jun 25 14:53:08.421000 audit: BPF prog-id=201 op=LOAD Jun 25 14:53:08.490719 kernel: audit: type=1334 audit(1719327188.421:554): prog-id=201 op=LOAD Jun 25 14:53:08.421000 audit[4919]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4768 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:08.518903 kernel: audit: type=1300 audit(1719327188.421:554): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4768 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:08.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626337336136356334323261386137303566643634303764376266 Jun 25 14:53:08.542894 kernel: audit: type=1327 audit(1719327188.421:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626337336136356334323261386137303566643634303764376266 Jun 25 14:53:08.421000 audit: BPF prog-id=201 op=UNLOAD Jun 25 14:53:08.548495 containerd[1520]: time="2024-06-25T14:53:08.548436386Z" level=info msg="StartContainer for \"01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461\" returns successfully" Jun 25 14:53:08.550461 kernel: audit: type=1334 
audit(1719327188.421:555): prog-id=201 op=UNLOAD Jun 25 14:53:08.422000 audit: BPF prog-id=200 op=UNLOAD Jun 25 14:53:08.556844 kernel: audit: type=1334 audit(1719327188.422:556): prog-id=200 op=UNLOAD Jun 25 14:53:08.422000 audit: BPF prog-id=202 op=LOAD Jun 25 14:53:08.563044 kernel: audit: type=1334 audit(1719327188.422:557): prog-id=202 op=LOAD Jun 25 14:53:08.422000 audit[4919]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4768 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:08.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031626337336136356334323261386137303566643634303764376266 Jun 25 14:53:09.321377 kubelet[2883]: I0625 14:53:09.321327 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fd654d6cc-fn6hn" podStartSLOduration=27.488879694 podStartE2EDuration="31.321186462s" podCreationTimestamp="2024-06-25 14:52:38 +0000 UTC" firstStartedPulling="2024-06-25 14:53:04.468713106 +0000 UTC m=+46.651361341" lastFinishedPulling="2024-06-25 14:53:08.301019914 +0000 UTC m=+50.483668109" observedRunningTime="2024-06-25 14:53:09.272257182 +0000 UTC m=+51.454905417" watchObservedRunningTime="2024-06-25 14:53:09.321186462 +0000 UTC m=+51.503834697" Jun 25 14:53:14.405000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.410416 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:53:14.410567 kernel: audit: type=1400 audit(1719327194.405:558): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.405000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=4005d282d0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.455599 kernel: audit: type=1300 audit(1719327194.405:558): arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=4005d282d0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.405000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.477643 kernel: audit: type=1327 audit(1719327194.405:558): 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.425000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.500437 kernel: audit: type=1400 audit(1719327194.425:559): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.425000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=4007244030 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.527362 kernel: audit: type=1300 audit(1719327194.425:559): arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=4007244030 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.425000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.550553 kernel: audit: type=1327 audit(1719327194.425:559): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.440000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.570676 kernel: audit: type=1400 audit(1719327194.440:560): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.440000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400eb90be0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.595792 kernel: audit: type=1300 audit(1719327194.440:560): arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400eb90be0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.440000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.618098 kernel: audit: type=1327 audit(1719327194.440:560): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.516000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.638658 kernel: audit: type=1400 audit(1719327194.516:561): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.516000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400642c240 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.516000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.554000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.554000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001a32ae0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:14.554000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:14.555000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.555000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000c63600 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:14.555000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:14.570000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.570000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=400603c2d0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.570000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:14.571000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:14.571000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=68 a1=4009aabf60 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:53:14.571000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:53:17.926125 containerd[1520]: time="2024-06-25T14:53:17.926067448Z" level=info msg="StopPodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\"" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.972 [WARNING][4993] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"60c4be3a-a62f-44dc-95b7-ebe069fe4d27", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69", Pod:"coredns-76f75df574-m6gb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5edd99dce4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.973 [INFO][4993] k8s.go 608: Cleaning up netns ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.973 [INFO][4993] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" iface="eth0" netns="" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.973 [INFO][4993] k8s.go 615: Releasing IP address(es) ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.973 [INFO][4993] utils.go 188: Calico CNI releasing IP address ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.997 [INFO][5001] ipam_plugin.go 411: Releasing address using handleID ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.998 [INFO][5001] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:17.998 [INFO][5001] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:18.012 [WARNING][5001] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:18.012 [INFO][5001] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:18.014 [INFO][5001] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.018073 containerd[1520]: 2024-06-25 14:53:18.016 [INFO][4993] k8s.go 621: Teardown processing complete. ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.018871 containerd[1520]: time="2024-06-25T14:53:18.018833010Z" level=info msg="TearDown network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" successfully" Jun 25 14:53:18.018960 containerd[1520]: time="2024-06-25T14:53:18.018942971Z" level=info msg="StopPodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" returns successfully" Jun 25 14:53:18.019707 containerd[1520]: time="2024-06-25T14:53:18.019677375Z" level=info msg="RemovePodSandbox for \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\"" Jun 25 14:53:18.025151 containerd[1520]: time="2024-06-25T14:53:18.019882216Z" level=info msg="Forcibly stopping sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\"" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.062 [WARNING][5020] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"60c4be3a-a62f-44dc-95b7-ebe069fe4d27", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"082e7d91dd281d618c49cad148119773dc6b82441bd4d8ad5279a7e4be44eb69", Pod:"coredns-76f75df574-m6gb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5edd99dce4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.062 [INFO][5020] k8s.go 608: Cleaning up netns ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.062 [INFO][5020] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" iface="eth0" netns="" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.062 [INFO][5020] k8s.go 615: Releasing IP address(es) ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.062 [INFO][5020] utils.go 188: Calico CNI releasing IP address ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.083 [INFO][5027] ipam_plugin.go 411: Releasing address using handleID ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.084 [INFO][5027] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.086 [INFO][5027] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.097 [WARNING][5027] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.097 [INFO][5027] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" HandleID="k8s-pod-network.f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--m6gb7-eth0" Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.099 [INFO][5027] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.103066 containerd[1520]: 2024-06-25 14:53:18.101 [INFO][5020] k8s.go 621: Teardown processing complete. ContainerID="f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3" Jun 25 14:53:18.103672 containerd[1520]: time="2024-06-25T14:53:18.103635120Z" level=info msg="TearDown network for sandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" successfully" Jun 25 14:53:18.113748 containerd[1520]: time="2024-06-25T14:53:18.113699261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:53:18.113990 containerd[1520]: time="2024-06-25T14:53:18.113965983Z" level=info msg="RemovePodSandbox \"f8b46802e55ca3c25d8f6365fd6a08b0f9d673e1658c309138cdc114a4176eb3\" returns successfully" Jun 25 14:53:18.114741 containerd[1520]: time="2024-06-25T14:53:18.114713907Z" level=info msg="StopPodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\"" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.156 [WARNING][5045] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0", GenerateName:"calico-kube-controllers-5fd654d6cc-", Namespace:"calico-system", SelfLink:"", UID:"a00cdd34-fa68-4d2a-acef-128a84544a34", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fd654d6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554", Pod:"calico-kube-controllers-5fd654d6cc-fn6hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7610b94d6ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.157 [INFO][5045] k8s.go 608: Cleaning up netns ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.157 [INFO][5045] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" iface="eth0" netns="" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.157 [INFO][5045] k8s.go 615: Releasing IP address(es) ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.157 [INFO][5045] utils.go 188: Calico CNI releasing IP address ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.177 [INFO][5051] ipam_plugin.go 411: Releasing address using handleID ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.178 [INFO][5051] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.178 [INFO][5051] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.186 [WARNING][5051] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.186 [INFO][5051] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.188 [INFO][5051] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.191608 containerd[1520]: 2024-06-25 14:53:18.189 [INFO][5045] k8s.go 621: Teardown processing complete. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.192309 containerd[1520]: time="2024-06-25T14:53:18.191576010Z" level=info msg="TearDown network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" successfully" Jun 25 14:53:18.192441 containerd[1520]: time="2024-06-25T14:53:18.192413855Z" level=info msg="StopPodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" returns successfully" Jun 25 14:53:18.193527 containerd[1520]: time="2024-06-25T14:53:18.193195979Z" level=info msg="RemovePodSandbox for \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\"" Jun 25 14:53:18.193527 containerd[1520]: time="2024-06-25T14:53:18.193306660Z" level=info msg="Forcibly stopping sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\"" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.232 [WARNING][5070] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0", GenerateName:"calico-kube-controllers-5fd654d6cc-", Namespace:"calico-system", SelfLink:"", UID:"a00cdd34-fa68-4d2a-acef-128a84544a34", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fd654d6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"3d7342240597b24dbdc5e9548fadf7035c8b0619fa23cdbfb2ba94e11af92554", Pod:"calico-kube-controllers-5fd654d6cc-fn6hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7610b94d6ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.233 [INFO][5070] k8s.go 608: Cleaning up netns ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.233 [INFO][5070] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" iface="eth0" netns="" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.233 [INFO][5070] k8s.go 615: Releasing IP address(es) ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.233 [INFO][5070] utils.go 188: Calico CNI releasing IP address ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.253 [INFO][5076] ipam_plugin.go 411: Releasing address using handleID ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.253 [INFO][5076] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.253 [INFO][5076] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.263 [WARNING][5076] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.263 [INFO][5076] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" HandleID="k8s-pod-network.f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--kube--controllers--5fd654d6cc--fn6hn-eth0" Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.265 [INFO][5076] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.268459 containerd[1520]: 2024-06-25 14:53:18.266 [INFO][5070] k8s.go 621: Teardown processing complete. ContainerID="f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5" Jun 25 14:53:18.269002 containerd[1520]: time="2024-06-25T14:53:18.268966756Z" level=info msg="TearDown network for sandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" successfully" Jun 25 14:53:18.276774 containerd[1520]: time="2024-06-25T14:53:18.276708442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:53:18.276924 containerd[1520]: time="2024-06-25T14:53:18.276830923Z" level=info msg="RemovePodSandbox \"f37db34bcd4294c215b22e88c518544890d8bcbf822ce54c87ada32715fd17e5\" returns successfully" Jun 25 14:53:18.277837 containerd[1520]: time="2024-06-25T14:53:18.277806289Z" level=info msg="StopPodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\"" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.330 [WARNING][5097] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1", Pod:"csi-node-driver-rstqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d119c2d156", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.330 [INFO][5097] k8s.go 608: Cleaning up netns ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.331 [INFO][5097] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" iface="eth0" netns="" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.331 [INFO][5097] k8s.go 615: Releasing IP address(es) ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.331 [INFO][5097] utils.go 188: Calico CNI releasing IP address ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.351 [INFO][5103] ipam_plugin.go 411: Releasing address using handleID ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.351 [INFO][5103] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.351 [INFO][5103] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.359 [WARNING][5103] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.359 [INFO][5103] ipam_plugin.go 439: Releasing address using workloadID ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.361 [INFO][5103] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.364257 containerd[1520]: 2024-06-25 14:53:18.362 [INFO][5097] k8s.go 621: Teardown processing complete. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.364727 containerd[1520]: time="2024-06-25T14:53:18.364304129Z" level=info msg="TearDown network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" successfully" Jun 25 14:53:18.364727 containerd[1520]: time="2024-06-25T14:53:18.364338210Z" level=info msg="StopPodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" returns successfully" Jun 25 14:53:18.364849 containerd[1520]: time="2024-06-25T14:53:18.364800292Z" level=info msg="RemovePodSandbox for \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\"" Jun 25 14:53:18.364898 containerd[1520]: time="2024-06-25T14:53:18.364856893Z" level=info msg="Forcibly stopping sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\"" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.406 [WARNING][5122] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e7892aac-4fe7-4e98-ad8c-38ff0dbdd0b3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"13c8ccf28083d925815624465c6e20aab9f2a3c388150a5d9a734aa9596c02f1", Pod:"csi-node-driver-rstqt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.61.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8d119c2d156", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.407 [INFO][5122] k8s.go 608: Cleaning up netns ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.407 [INFO][5122] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" iface="eth0" netns="" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.407 [INFO][5122] k8s.go 615: Releasing IP address(es) ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.407 [INFO][5122] utils.go 188: Calico CNI releasing IP address ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.427 [INFO][5128] ipam_plugin.go 411: Releasing address using handleID ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.427 [INFO][5128] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.428 [INFO][5128] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.436 [WARNING][5128] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.436 [INFO][5128] ipam_plugin.go 439: Releasing address using workloadID ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" HandleID="k8s-pod-network.477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Workload="ci--3815.2.4--a--39232a46a6-k8s-csi--node--driver--rstqt-eth0" Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.438 [INFO][5128] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.441726 containerd[1520]: 2024-06-25 14:53:18.439 [INFO][5122] k8s.go 621: Teardown processing complete. ContainerID="477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22" Jun 25 14:53:18.442273 containerd[1520]: time="2024-06-25T14:53:18.442215758Z" level=info msg="TearDown network for sandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" successfully" Jun 25 14:53:18.449671 containerd[1520]: time="2024-06-25T14:53:18.449626163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:53:18.449905 containerd[1520]: time="2024-06-25T14:53:18.449879444Z" level=info msg="RemovePodSandbox \"477e42fd96df82f9481031ddd66ad3d3ad2788912ef55d3911706ee2b5095e22\" returns successfully" Jun 25 14:53:18.450557 containerd[1520]: time="2024-06-25T14:53:18.450489608Z" level=info msg="StopPodSandbox for \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\"" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.488 [WARNING][5147] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"452cade1-fc01-42b4-8e1a-60614efcd66d", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad", Pod:"coredns-76f75df574-kptq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba7bf24efa7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.488 [INFO][5147] k8s.go 608: Cleaning up netns ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.488 [INFO][5147] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" iface="eth0" netns="" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.488 [INFO][5147] k8s.go 615: Releasing IP address(es) ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.488 [INFO][5147] utils.go 188: Calico CNI releasing IP address ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.513 [INFO][5153] ipam_plugin.go 411: Releasing address using handleID ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.514 [INFO][5153] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.514 [INFO][5153] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.522 [WARNING][5153] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.522 [INFO][5153] ipam_plugin.go 439: Releasing address using workloadID ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.524 [INFO][5153] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.527352 containerd[1520]: 2024-06-25 14:53:18.525 [INFO][5147] k8s.go 621: Teardown processing complete. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.527933 containerd[1520]: time="2024-06-25T14:53:18.527884394Z" level=info msg="TearDown network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" successfully" Jun 25 14:53:18.528016 containerd[1520]: time="2024-06-25T14:53:18.527997194Z" level=info msg="StopPodSandbox for \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" returns successfully" Jun 25 14:53:18.528652 containerd[1520]: time="2024-06-25T14:53:18.528620078Z" level=info msg="RemovePodSandbox for \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\"" Jun 25 14:53:18.528753 containerd[1520]: time="2024-06-25T14:53:18.528674119Z" level=info msg="Forcibly stopping sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\"" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.566 [WARNING][5172] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"452cade1-fc01-42b4-8e1a-60614efcd66d", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"9692c9c90359dbdaac35f24dd34ec3bb4f2be818a2b5152771d0a5c6115814ad", Pod:"coredns-76f75df574-kptq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba7bf24efa7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.566 [INFO][5172] k8s.go 608: Cleaning up netns ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.566 [INFO][5172] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" iface="eth0" netns="" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.566 [INFO][5172] k8s.go 615: Releasing IP address(es) ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.566 [INFO][5172] utils.go 188: Calico CNI releasing IP address ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.589 [INFO][5178] ipam_plugin.go 411: Releasing address using handleID ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.589 [INFO][5178] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.590 [INFO][5178] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.598 [WARNING][5178] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.598 [INFO][5178] ipam_plugin.go 439: Releasing address using workloadID ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" HandleID="k8s-pod-network.83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Workload="ci--3815.2.4--a--39232a46a6-k8s-coredns--76f75df574--kptq5-eth0" Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.603 [INFO][5178] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:53:18.607444 containerd[1520]: 2024-06-25 14:53:18.605 [INFO][5172] k8s.go 621: Teardown processing complete. ContainerID="83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48" Jun 25 14:53:18.607912 containerd[1520]: time="2024-06-25T14:53:18.607487713Z" level=info msg="TearDown network for sandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" successfully" Jun 25 14:53:18.629205 containerd[1520]: time="2024-06-25T14:53:18.629151243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:53:18.629483 containerd[1520]: time="2024-06-25T14:53:18.629457165Z" level=info msg="RemovePodSandbox \"83918caca5ff40fdff71045a7260796dcaad57997ec664fce8365db8d9192f48\" returns successfully" Jun 25 14:53:19.835000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:19.841010 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 14:53:19.841144 kernel: audit: type=1400 audit(1719327199.835:566): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:19.835000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40010f31a0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:19.888164 kernel: audit: type=1300 audit(1719327199.835:566): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40010f31a0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:19.835000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:19.911423 kernel: audit: type=1327 audit(1719327199.835:566): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:19.865000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:19.932716 kernel: audit: type=1400 audit(1719327199.865:567): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:19.865000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40010f31c0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:19.959564 kernel: audit: type=1300 audit(1719327199.865:567): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40010f31c0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:19.865000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:19.982764 kernel: audit: type=1327 audit(1719327199.865:567): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:19.870000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:20.003724 kernel: audit: type=1400 audit(1719327199.870:568): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:19.870000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000bae4c0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:20.030524 kernel: audit: type=1300 audit(1719327199.870:568): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000bae4c0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:19.870000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:20.054080 kernel: audit: type=1327 audit(1719327199.870:568): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:19.871000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:20.077214 kernel: audit: type=1400 audit(1719327199.871:569): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:53:19.871000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000bae4e0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:53:19.871000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:53:20.772515 systemd[1]: run-containerd-runc-k8s.io-01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461-runc.fF3n9H.mount: Deactivated successfully. Jun 25 14:53:28.772009 systemd[1]: run-containerd-runc-k8s.io-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144-runc.E0QwkJ.mount: Deactivated successfully. Jun 25 14:53:32.038910 kubelet[2883]: I0625 14:53:32.038797 2883 topology_manager.go:215] "Topology Admit Handler" podUID="3d718195-671b-4850-9e65-d54754cd3de8" podNamespace="calico-apiserver" podName="calico-apiserver-686c79cb45-7pxmj" Jun 25 14:53:32.045624 systemd[1]: Created slice kubepods-besteffort-pod3d718195_671b_4850_9e65_d54754cd3de8.slice - libcontainer container kubepods-besteffort-pod3d718195_671b_4850_9e65_d54754cd3de8.slice. 
Jun 25 14:53:32.053481 kubelet[2883]: W0625 14:53:32.053440 2883 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3815.2.4-a-39232a46a6" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3815.2.4-a-39232a46a6' and this object Jun 25 14:53:32.053711 kubelet[2883]: E0625 14:53:32.053697 2883 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3815.2.4-a-39232a46a6" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3815.2.4-a-39232a46a6' and this object Jun 25 14:53:32.068000 audit[5243]: NETFILTER_CFG table=filter:116 family=2 entries=9 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.074170 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:53:32.074341 kernel: audit: type=1325 audit(1719327212.068:570): table=filter:116 family=2 entries=9 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.088815 kubelet[2883]: I0625 14:53:32.088765 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29z8\" (UniqueName: \"kubernetes.io/projected/3d718195-671b-4850-9e65-d54754cd3de8-kube-api-access-h29z8\") pod \"calico-apiserver-686c79cb45-7pxmj\" (UID: \"3d718195-671b-4850-9e65-d54754cd3de8\") " pod="calico-apiserver/calico-apiserver-686c79cb45-7pxmj" Jun 25 14:53:32.089065 kubelet[2883]: I0625 14:53:32.089052 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d718195-671b-4850-9e65-d54754cd3de8-calico-apiserver-certs\") pod \"calico-apiserver-686c79cb45-7pxmj\" (UID: \"3d718195-671b-4850-9e65-d54754cd3de8\") " pod="calico-apiserver/calico-apiserver-686c79cb45-7pxmj" Jun 25 14:53:32.090334 kubelet[2883]: I0625 14:53:32.090279 2883 topology_manager.go:215] "Topology Admit Handler" podUID="2b01f70a-3899-4185-81d7-69456be568e3" podNamespace="calico-apiserver" podName="calico-apiserver-686c79cb45-85rk2" Jun 25 14:53:32.096676 systemd[1]: Created slice kubepods-besteffort-pod2b01f70a_3899_4185_81d7_69456be568e3.slice - libcontainer container kubepods-besteffort-pod2b01f70a_3899_4185_81d7_69456be568e3.slice. 
Jun 25 14:53:32.068000 audit[5243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd4c3e430 a2=0 a3=1 items=0 ppid=3023 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.127967 kernel: audit: type=1300 audit(1719327212.068:570): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd4c3e430 a2=0 a3=1 items=0 ppid=3023 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.152380 kernel: audit: type=1327 audit(1719327212.068:570): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.070000 audit[5243]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.167349 kernel: audit: type=1325 audit(1719327212.070:571): table=nat:117 family=2 entries=20 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.070000 audit[5243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd4c3e430 a2=0 a3=1 items=0 ppid=3023 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.194003 kernel: audit: type=1300 audit(1719327212.070:571): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd4c3e430 a2=0 a3=1 items=0 ppid=3023 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.209199 kernel: audit: type=1327 audit(1719327212.070:571): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.207000 audit[5247]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=5247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.224334 kernel: audit: type=1325 audit(1719327212.207:572): table=filter:118 family=2 entries=10 op=nft_register_rule pid=5247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.207000 audit[5247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe394f820 a2=0 a3=1 items=0 ppid=3023 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.250149 kernel: audit: type=1300 audit(1719327212.207:572): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe394f820 a2=0 a3=1 items=0 ppid=3023 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.265375 kernel: audit: type=1327 audit(1719327212.207:572): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.213000 audit[5247]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.284883 kernel: audit: type=1325 audit(1719327212.213:573): table=nat:119 family=2 entries=20 op=nft_register_rule pid=5247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:32.213000 audit[5247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe394f820 a2=0 a3=1 items=0 ppid=3023 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:32.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:32.297535 kubelet[2883]: I0625 14:53:32.297410 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b01f70a-3899-4185-81d7-69456be568e3-calico-apiserver-certs\") pod \"calico-apiserver-686c79cb45-85rk2\" (UID: \"2b01f70a-3899-4185-81d7-69456be568e3\") " pod="calico-apiserver/calico-apiserver-686c79cb45-85rk2" Jun 25 14:53:32.297726 kubelet[2883]: I0625 14:53:32.297713 2883 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7mr\" (UniqueName: \"kubernetes.io/projected/2b01f70a-3899-4185-81d7-69456be568e3-kube-api-access-bv7mr\") pod \"calico-apiserver-686c79cb45-85rk2\" (UID: \"2b01f70a-3899-4185-81d7-69456be568e3\") " pod="calico-apiserver/calico-apiserver-686c79cb45-85rk2" Jun 25 14:53:33.197846 kubelet[2883]: E0625 14:53:33.197793 2883 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 14:53:33.198317 kubelet[2883]: E0625 14:53:33.197927 2883 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d718195-671b-4850-9e65-d54754cd3de8-calico-apiserver-certs podName:3d718195-671b-4850-9e65-d54754cd3de8 nodeName:}" failed. No retries permitted until 2024-06-25 14:53:33.697896128 +0000 UTC m=+75.880544363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3d718195-671b-4850-9e65-d54754cd3de8-calico-apiserver-certs") pod "calico-apiserver-686c79cb45-7pxmj" (UID: "3d718195-671b-4850-9e65-d54754cd3de8") : failed to sync secret cache: timed out waiting for the condition Jun 25 14:53:33.399561 kubelet[2883]: E0625 14:53:33.399513 2883 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 14:53:33.399738 kubelet[2883]: E0625 14:53:33.399623 2883 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b01f70a-3899-4185-81d7-69456be568e3-calico-apiserver-certs podName:2b01f70a-3899-4185-81d7-69456be568e3 nodeName:}" failed. 
No retries permitted until 2024-06-25 14:53:33.899599824 +0000 UTC m=+76.082248059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2b01f70a-3899-4185-81d7-69456be568e3-calico-apiserver-certs") pod "calico-apiserver-686c79cb45-85rk2" (UID: "2b01f70a-3899-4185-81d7-69456be568e3") : failed to sync secret cache: timed out waiting for the condition Jun 25 14:53:33.849024 containerd[1520]: time="2024-06-25T14:53:33.848951345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686c79cb45-7pxmj,Uid:3d718195-671b-4850-9e65-d54754cd3de8,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:53:34.019922 systemd-networkd[1257]: cali64e32132ac6: Link UP Jun 25 14:53:34.032762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:53:34.032930 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali64e32132ac6: link becomes ready Jun 25 14:53:34.033545 systemd-networkd[1257]: cali64e32132ac6: Gained carrier Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.927 [INFO][5256] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0 calico-apiserver-686c79cb45- calico-apiserver 3d718195-671b-4850-9e65-d54754cd3de8 877 0 2024-06-25 14:53:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:686c79cb45 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-39232a46a6 calico-apiserver-686c79cb45-7pxmj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64e32132ac6 [] []}} ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.927 [INFO][5256] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.965 [INFO][5268] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" HandleID="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.977 [INFO][5268] ipam_plugin.go 264: Auto assigning IP ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" HandleID="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263ca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-39232a46a6", "pod":"calico-apiserver-686c79cb45-7pxmj", "timestamp":"2024-06-25 14:53:33.965425138 +0000 UTC"}, Hostname:"ci-3815.2.4-a-39232a46a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.978 [INFO][5268] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.978 [INFO][5268] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.978 [INFO][5268] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-39232a46a6' Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.980 [INFO][5268] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.988 [INFO][5268] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.994 [INFO][5268] ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:33.997 [INFO][5268] ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.000 [INFO][5268] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.000 [INFO][5268] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.001 [INFO][5268] ipam.go 1685: Creating new handle: k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4 Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.006 [INFO][5268] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.014 [INFO][5268] ipam.go 1216: Successfully claimed IPs: [192.168.61.5/26] block=192.168.61.0/26 handle="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.015 [INFO][5268] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.5/26] handle="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.015 [INFO][5268] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:53:34.051769 containerd[1520]: 2024-06-25 14:53:34.015 [INFO][5268] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.61.5/26] IPv6=[] ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" HandleID="k8s-pod-network.e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.052557 containerd[1520]: 2024-06-25 14:53:34.016 [INFO][5256] k8s.go 386: Populated endpoint ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0", GenerateName:"calico-apiserver-686c79cb45-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d718195-671b-4850-9e65-d54754cd3de8", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686c79cb45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"", Pod:"calico-apiserver-686c79cb45-7pxmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64e32132ac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:34.052557 containerd[1520]: 2024-06-25 14:53:34.017 [INFO][5256] k8s.go 387: Calico CNI using IPs: [192.168.61.5/32] ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.052557 containerd[1520]: 2024-06-25 14:53:34.017 [INFO][5256] dataplane_linux.go 68: Setting the host side veth name to cali64e32132ac6 ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.052557 containerd[1520]: 2024-06-25 14:53:34.034 [INFO][5256] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.052557 containerd[1520]: 2024-06-25 14:53:34.035 [INFO][5256] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0", GenerateName:"calico-apiserver-686c79cb45-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d718195-671b-4850-9e65-d54754cd3de8", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686c79cb45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4", Pod:"calico-apiserver-686c79cb45-7pxmj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64e32132ac6", MAC:"32:9a:54:6f:24:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:34.052557 containerd[1520]: 2024-06-25 14:53:34.049 [INFO][5256] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-7pxmj" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--7pxmj-eth0" Jun 25 14:53:34.076000 audit[5300]: NETFILTER_CFG table=filter:120 family=2 entries=55 op=nft_register_chain pid=5300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:34.076000 audit[5300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27464 a0=3 a1=ffffcbacf100 a2=0 a3=ffff81deffa8 items=0 ppid=4190 pid=5300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.076000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:34.079106 containerd[1520]: time="2024-06-25T14:53:34.078993032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:53:34.079390 containerd[1520]: time="2024-06-25T14:53:34.079330394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:34.079452 containerd[1520]: time="2024-06-25T14:53:34.079387955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:53:34.079452 containerd[1520]: time="2024-06-25T14:53:34.079415275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:34.104457 systemd[1]: Started cri-containerd-e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4.scope - libcontainer container e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4. Jun 25 14:53:34.115000 audit: BPF prog-id=203 op=LOAD Jun 25 14:53:34.115000 audit: BPF prog-id=204 op=LOAD Jun 25 14:53:34.115000 audit[5312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=5299 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537653939336466633465643761313639336435316264623136376461 Jun 25 14:53:34.115000 audit: BPF prog-id=205 op=LOAD Jun 25 14:53:34.115000 audit[5312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=5299 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537653939336466633465643761313639336435316264623136376461 Jun 25 14:53:34.115000 audit: BPF prog-id=205 op=UNLOAD Jun 25 14:53:34.115000 audit: BPF prog-id=204 op=UNLOAD Jun 25 14:53:34.115000 audit: BPF prog-id=206 op=LOAD Jun 25 14:53:34.115000 audit[5312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=5299 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537653939336466633465643761313639336435316264623136376461 Jun 25 14:53:34.136809 containerd[1520]: time="2024-06-25T14:53:34.136753025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686c79cb45-7pxmj,Uid:3d718195-671b-4850-9e65-d54754cd3de8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4\"" Jun 25 14:53:34.140034 containerd[1520]: time="2024-06-25T14:53:34.139474039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:53:34.200823 containerd[1520]: time="2024-06-25T14:53:34.200778771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686c79cb45-85rk2,Uid:2b01f70a-3899-4185-81d7-69456be568e3,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:53:34.359489 systemd-networkd[1257]: caliacdcf38385d: Link UP Jun 25 14:53:34.367624 
systemd-networkd[1257]: caliacdcf38385d: Gained carrier Jun 25 14:53:34.368310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliacdcf38385d: link becomes ready Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.276 [INFO][5334] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0 calico-apiserver-686c79cb45- calico-apiserver 2b01f70a-3899-4185-81d7-69456be568e3 879 0 2024-06-25 14:53:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:686c79cb45 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-39232a46a6 calico-apiserver-686c79cb45-85rk2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliacdcf38385d [] []}} ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.276 [INFO][5334] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.312 [INFO][5345] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" HandleID="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.326 [INFO][5345] ipam_plugin.go 264: Auto assigning IP ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" HandleID="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307350), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-39232a46a6", "pod":"calico-apiserver-686c79cb45-85rk2", "timestamp":"2024-06-25 14:53:34.312593695 +0000 UTC"}, Hostname:"ci-3815.2.4-a-39232a46a6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.327 [INFO][5345] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.327 [INFO][5345] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.327 [INFO][5345] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-39232a46a6' Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.329 [INFO][5345] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.334 [INFO][5345] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.339 [INFO][5345] ipam.go 489: Trying affinity for 192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.341 [INFO][5345] ipam.go 155: Attempting to load block cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.344 [INFO][5345] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.61.0/26 host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.344 [INFO][5345] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.61.0/26 handle="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.345 [INFO][5345] ipam.go 1685: Creating new handle: k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11 Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.349 [INFO][5345] ipam.go 1203: Writing block in order to claim IPs block=192.168.61.0/26 handle="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.354 [INFO][5345] ipam.go 1216: Successfully claimed IPs: [192.168.61.6/26] block=192.168.61.0/26 handle="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.354 [INFO][5345] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.61.6/26] handle="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" host="ci-3815.2.4-a-39232a46a6" Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.354 [INFO][5345] ipam_plugin.go 373: Released host-wide IPAM lock. 
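The IPAM trace above follows Calico's usual allocation path: confirm this node's affinity for the 192.168.61.0/26 block, load the block, then claim one free address from it (192.168.61.6 here). A minimal sketch of the containment and block-size arithmetic using Python's standard ipaddress module, purely for illustration (this is not Calico's own code):

    import ipaddress

    # The block this node holds an affinity for, per the ipam.go entries above.
    block = ipaddress.ip_network("192.168.61.0/26")

    # The address that was ultimately claimed for the pod sandbox.
    pod_ip = ipaddress.ip_address("192.168.61.6")

    print(pod_ip in block)      # True: the claimed IP sits inside the affine block
    print(block.num_addresses)  # 64: a /26 gives the node a block of 64 addresses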
Jun 25 14:53:34.388020 containerd[1520]: 2024-06-25 14:53:34.354 [INFO][5345] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.61.6/26] IPv6=[] ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" HandleID="k8s-pod-network.10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Workload="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.388690 containerd[1520]: 2024-06-25 14:53:34.356 [INFO][5334] k8s.go 386: Populated endpoint ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0", GenerateName:"calico-apiserver-686c79cb45-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b01f70a-3899-4185-81d7-69456be568e3", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686c79cb45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"", Pod:"calico-apiserver-686c79cb45-85rk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacdcf38385d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:34.388690 containerd[1520]: 2024-06-25 14:53:34.356 [INFO][5334] k8s.go 387: Calico CNI using IPs: [192.168.61.6/32] ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.388690 containerd[1520]: 2024-06-25 14:53:34.356 [INFO][5334] dataplane_linux.go 68: Setting the host side veth name to caliacdcf38385d ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.388690 containerd[1520]: 2024-06-25 14:53:34.359 [INFO][5334] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.388690 containerd[1520]: 2024-06-25 14:53:34.360 [INFO][5334] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0", GenerateName:"calico-apiserver-686c79cb45-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b01f70a-3899-4185-81d7-69456be568e3", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 53, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"686c79cb45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-39232a46a6", ContainerID:"10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11", Pod:"calico-apiserver-686c79cb45-85rk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacdcf38385d", MAC:"0a:18:c7:40:e5:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:53:34.388690 containerd[1520]: 2024-06-25 14:53:34.382 [INFO][5334] k8s.go 500: Wrote updated endpoint to datastore ContainerID="10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11" Namespace="calico-apiserver" Pod="calico-apiserver-686c79cb45-85rk2" WorkloadEndpoint="ci--3815.2.4--a--39232a46a6-k8s-calico--apiserver--686c79cb45--85rk2-eth0" Jun 25 14:53:34.406000 audit[5364]: NETFILTER_CFG table=filter:121 family=2 entries=49 op=nft_register_chain pid=5364 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:53:34.406000 audit[5364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24300 a0=3 a1=fffffaf3f3c0 a2=0 a3=ffffa5692fa8 items=0 ppid=4190 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.406000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:53:34.415725 containerd[1520]: time="2024-06-25T14:53:34.415470411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:53:34.415725 containerd[1520]: time="2024-06-25T14:53:34.415536211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:34.415725 containerd[1520]: time="2024-06-25T14:53:34.415556571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:53:34.415725 containerd[1520]: time="2024-06-25T14:53:34.415571211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:53:34.432492 systemd[1]: Started cri-containerd-10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11.scope - libcontainer container 10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11. Jun 25 14:53:34.445000 audit: BPF prog-id=207 op=LOAD Jun 25 14:53:34.446000 audit: BPF prog-id=208 op=LOAD Jun 25 14:53:34.446000 audit[5383]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001338b0 a2=78 a3=0 items=0 ppid=5372 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613535363266646632646536393266393537663362386165343731 Jun 25 14:53:34.446000 audit: BPF prog-id=209 op=LOAD Jun 25 14:53:34.446000 audit[5383]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000133640 a2=78 a3=0 items=0 ppid=5372 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613535363266646632646536393266393537663362386165343731 Jun 25 14:53:34.447000 audit: BPF prog-id=209 op=UNLOAD Jun 25 14:53:34.447000 audit: BPF prog-id=208 op=UNLOAD Jun 25 14:53:34.447000 audit: BPF prog-id=210 op=LOAD Jun 25 14:53:34.447000 audit[5383]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000133b10 a2=78 a3=0 items=0 ppid=5372 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:34.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130613535363266646632646536393266393537663362386165343731 Jun 25 14:53:34.471379 containerd[1520]: time="2024-06-25T14:53:34.471305512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-686c79cb45-85rk2,Uid:2b01f70a-3899-4185-81d7-69456be568e3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11\"" Jun 25 14:53:35.114519 systemd-networkd[1257]: cali64e32132ac6: Gained IPv6LL Jun 25 14:53:36.122665 containerd[1520]: time="2024-06-25T14:53:36.122618759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.125673 containerd[1520]: time="2024-06-25T14:53:36.125623375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 
14:53:36.132359 containerd[1520]: time="2024-06-25T14:53:36.131488006Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.135446 containerd[1520]: time="2024-06-25T14:53:36.135400507Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.139460 containerd[1520]: time="2024-06-25T14:53:36.139415729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.140196 containerd[1520]: time="2024-06-25T14:53:36.140118212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.000592932s" Jun 25 14:53:36.140196 containerd[1520]: time="2024-06-25T14:53:36.140168453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:53:36.142688 containerd[1520]: time="2024-06-25T14:53:36.141877502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:53:36.145965 containerd[1520]: time="2024-06-25T14:53:36.145920123Z" level=info msg="CreateContainer within sandbox \"e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:53:36.176490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803362133.mount: Deactivated successfully. Jun 25 14:53:36.201033 containerd[1520]: time="2024-06-25T14:53:36.200972698Z" level=info msg="CreateContainer within sandbox \"e7e993dfc4ed7a1693d51bdb167da9f816b143b0a3d55973c2be5172dc13b3a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"85b91cc0026a1d6c9d0b4d732a0460e65bec117f441df052d6e3fff08c7f7b4e\"" Jun 25 14:53:36.201764 containerd[1520]: time="2024-06-25T14:53:36.201730342Z" level=info msg="StartContainer for \"85b91cc0026a1d6c9d0b4d732a0460e65bec117f441df052d6e3fff08c7f7b4e\"" Jun 25 14:53:36.255969 systemd[1]: run-containerd-runc-k8s.io-85b91cc0026a1d6c9d0b4d732a0460e65bec117f441df052d6e3fff08c7f7b4e-runc.ZPWCxU.mount: Deactivated successfully. Jun 25 14:53:36.264484 systemd[1]: Started cri-containerd-85b91cc0026a1d6c9d0b4d732a0460e65bec117f441df052d6e3fff08c7f7b4e.scope - libcontainer container 85b91cc0026a1d6c9d0b4d732a0460e65bec117f441df052d6e3fff08c7f7b4e. 
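The "in 2.000592932s" figure in the Pulled message above is containerd's own measurement of the apiserver image pull; it agrees, to within a fraction of a millisecond, with the gap between the PullImage entry at 14:53:34.139474039Z and the Pulled entry at 14:53:36.140118212Z. A quick cross-check of that gap, assuming Python 3 (timestamps are trimmed to microseconds because strptime does not parse nanoseconds):

    from datetime import datetime, timezone

    def parse(ts: str) -> datetime:
        # Keep only six fractional digits so %f can handle the value.
        base, frac = ts.rstrip("Z").split(".")
        return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

    pull_started  = parse("2024-06-25T14:53:34.139474039Z")  # PullImage log entry
    pull_finished = parse("2024-06-25T14:53:36.140118212Z")  # Pulled log entry

    print(pull_finished - pull_started)  # ~0:00:02.000644, consistent with the reported 2.000592932s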
Jun 25 14:53:36.280000 audit: BPF prog-id=211 op=LOAD Jun 25 14:53:36.280000 audit: BPF prog-id=212 op=LOAD Jun 25 14:53:36.280000 audit[5425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=5299 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623931636330303236613164366339643062346437333261303436 Jun 25 14:53:36.280000 audit: BPF prog-id=213 op=LOAD Jun 25 14:53:36.280000 audit[5425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=5299 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623931636330303236613164366339643062346437333261303436 Jun 25 14:53:36.280000 audit: BPF prog-id=213 op=UNLOAD Jun 25 14:53:36.280000 audit: BPF prog-id=212 op=UNLOAD Jun 25 14:53:36.280000 audit: BPF prog-id=214 op=LOAD Jun 25 14:53:36.280000 audit[5425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=5299 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835623931636330303236613164366339643062346437333261303436 Jun 25 14:53:36.324280 containerd[1520]: time="2024-06-25T14:53:36.324204317Z" level=info msg="StartContainer for \"85b91cc0026a1d6c9d0b4d732a0460e65bec117f441df052d6e3fff08c7f7b4e\" returns successfully" Jun 25 14:53:36.331425 systemd-networkd[1257]: caliacdcf38385d: Gained IPv6LL Jun 25 14:53:36.433000 audit[5450]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=5450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:36.433000 audit[5450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff4b62030 a2=0 a3=1 items=0 ppid=3023 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:36.438000 audit[5450]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=5450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:36.438000 audit[5450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff4b62030 a2=0 a3=1 items=0 ppid=3023 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:36.459469 containerd[1520]: time="2024-06-25T14:53:36.459405719Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.461736 containerd[1520]: time="2024-06-25T14:53:36.461662171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 14:53:36.467111 containerd[1520]: time="2024-06-25T14:53:36.467049880Z" level=info msg="ImageUpdate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.473697 containerd[1520]: time="2024-06-25T14:53:36.473637315Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.480720 containerd[1520]: time="2024-06-25T14:53:36.480667913Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:53:36.482166 containerd[1520]: time="2024-06-25T14:53:36.482125761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 340.183339ms" Jun 25 14:53:36.482284 containerd[1520]: time="2024-06-25T14:53:36.482178081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:53:36.484182 containerd[1520]: time="2024-06-25T14:53:36.484142452Z" level=info msg="CreateContainer within sandbox \"10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:53:36.533417 containerd[1520]: time="2024-06-25T14:53:36.533351995Z" level=info msg="CreateContainer within sandbox \"10a5562fdf2de692f957f3b8ae471e11afd725b137a1611af4eb6f151d6e6d11\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a1286efc7988d2b85f1adf0a8337bd5beb0bb381af73b5b8daa5f1fd91da666\"" Jun 25 14:53:36.534335 containerd[1520]: time="2024-06-25T14:53:36.534291280Z" level=info msg="StartContainer for \"7a1286efc7988d2b85f1adf0a8337bd5beb0bb381af73b5b8daa5f1fd91da666\"" Jun 25 14:53:36.571466 systemd[1]: Started cri-containerd-7a1286efc7988d2b85f1adf0a8337bd5beb0bb381af73b5b8daa5f1fd91da666.scope - libcontainer container 7a1286efc7988d2b85f1adf0a8337bd5beb0bb381af73b5b8daa5f1fd91da666. 
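The audit PROCTITLE records above carry the audited process's command line hex-encoded, with NUL bytes separating the argv elements (which is also why the long runc titles cut off mid container ID: audit truncates the field). A small sketch that turns one of those hex strings back into readable arguments:

    def decode_proctitle(hex_title: str) -> list[str]:
        # argv elements are NUL-separated inside the audit PROCTITLE field.
        return [part.decode() for part in bytes.fromhex(hex_title).split(b"\x00")]

    # The iptables proctitle from the NETFILTER_CFG events above.
    title = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    print(decode_proctitle(title))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The runc titles decode the same way, to runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... with the rest of the path truncated.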
Jun 25 14:53:36.592000 audit: BPF prog-id=215 op=LOAD Jun 25 14:53:36.592000 audit: BPF prog-id=216 op=LOAD Jun 25 14:53:36.592000 audit[5463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=5372 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761313238366566633739383864326238356631616466306138333337 Jun 25 14:53:36.593000 audit: BPF prog-id=217 op=LOAD Jun 25 14:53:36.593000 audit[5463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=5372 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761313238366566633739383864326238356631616466306138333337 Jun 25 14:53:36.593000 audit: BPF prog-id=217 op=UNLOAD Jun 25 14:53:36.593000 audit: BPF prog-id=216 op=UNLOAD Jun 25 14:53:36.593000 audit: BPF prog-id=218 op=LOAD Jun 25 14:53:36.593000 audit[5463]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=5372 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:36.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761313238366566633739383864326238356631616466306138333337 Jun 25 14:53:36.625463 containerd[1520]: time="2024-06-25T14:53:36.625400287Z" level=info msg="StartContainer for \"7a1286efc7988d2b85f1adf0a8337bd5beb0bb381af73b5b8daa5f1fd91da666\" returns successfully" Jun 25 14:53:37.345245 kubelet[2883]: I0625 14:53:37.345083 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-686c79cb45-7pxmj" podStartSLOduration=3.343273226 podStartE2EDuration="5.345034845s" podCreationTimestamp="2024-06-25 14:53:32 +0000 UTC" firstStartedPulling="2024-06-25 14:53:34.138782396 +0000 UTC m=+76.321430631" lastFinishedPulling="2024-06-25 14:53:36.140544015 +0000 UTC m=+78.323192250" observedRunningTime="2024-06-25 14:53:36.355279083 +0000 UTC m=+78.537927318" watchObservedRunningTime="2024-06-25 14:53:37.345034845 +0000 UTC m=+79.527683080" Jun 25 14:53:37.365000 audit[5492]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.370169 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 14:53:37.370358 kernel: audit: type=1325 audit(1719327217.365:602): table=filter:124 family=2 entries=10 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.365000 audit[5492]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc25a24d0 a2=0 a3=1 items=0 ppid=3023 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.410737 kernel: audit: type=1300 audit(1719327217.365:602): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc25a24d0 a2=0 a3=1 items=0 ppid=3023 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:37.426430 kernel: audit: type=1327 audit(1719327217.365:602): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:37.413000 audit[5492]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.439874 kernel: audit: type=1325 audit(1719327217.413:603): table=nat:125 family=2 entries=20 op=nft_register_rule pid=5492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.413000 audit[5492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc25a24d0 a2=0 a3=1 items=0 ppid=3023 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.466105 kernel: audit: type=1300 audit(1719327217.413:603): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc25a24d0 a2=0 a3=1 items=0 ppid=3023 pid=5492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:37.479522 kernel: audit: type=1327 audit(1719327217.413:603): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:37.604441 kubelet[2883]: I0625 14:53:37.604321 2883 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-686c79cb45-85rk2" podStartSLOduration=3.594798822 podStartE2EDuration="5.604266823s" podCreationTimestamp="2024-06-25 14:53:32 +0000 UTC" firstStartedPulling="2024-06-25 14:53:34.473039402 +0000 UTC m=+76.655687637" lastFinishedPulling="2024-06-25 14:53:36.482507403 +0000 UTC m=+78.665155638" observedRunningTime="2024-06-25 14:53:37.345402847 +0000 UTC m=+79.528051042" watchObservedRunningTime="2024-06-25 14:53:37.604266823 +0000 UTC m=+79.786915058" Jun 25 14:53:37.642000 audit[5494]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.642000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffcd4867b0 a2=0 a3=1 items=0 ppid=3023 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.681321 kernel: audit: type=1325 audit(1719327217.642:604): table=filter:126 family=2 entries=9 op=nft_register_rule pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.681436 kernel: audit: type=1300 audit(1719327217.642:604): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffcd4867b0 a2=0 a3=1 items=0 ppid=3023 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:37.694668 kernel: audit: type=1327 audit(1719327217.642:604): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:37.694000 audit[5494]: NETFILTER_CFG table=nat:127 family=2 entries=27 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.707593 kernel: audit: type=1325 audit(1719327217.694:605): table=nat:127 family=2 entries=27 op=nft_register_chain pid=5494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:37.694000 audit[5494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffcd4867b0 a2=0 a3=1 items=0 ppid=3023 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:37.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:38.717000 audit[5497]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=5497 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:38.717000 audit[5497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffffad2cf70 a2=0 a3=1 items=0 ppid=3023 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:38.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:38.721000 audit[5497]: NETFILTER_CFG table=nat:129 family=2 entries=34 op=nft_register_chain pid=5497 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:53:38.721000 audit[5497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=fffffad2cf70 a2=0 a3=1 items=0 ppid=3023 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:53:38.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:53:41.235552 systemd[1]: run-containerd-runc-k8s.io-01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461-runc.fygXxf.mount: Deactivated successfully. 
Jun 25 14:53:58.773719 systemd[1]: run-containerd-runc-k8s.io-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144-runc.Ya0g2N.mount: Deactivated successfully. Jun 25 14:54:14.407000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.411693 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 14:54:14.411825 kernel: audit: type=1400 audit(1719327254.407:608): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.407000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=4007daa7e0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.456830 kernel: audit: type=1300 audit(1719327254.407:608): arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=4007daa7e0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.407000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.479501 kernel: audit: type=1327 audit(1719327254.407:608): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.426000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.505238 kernel: audit: type=1400 audit(1719327254.426:609): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.426000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=4007daa840 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.536309 kernel: audit: type=1300 audit(1719327254.426:609): arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=4007daa840 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.426000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.560521 kernel: audit: type=1327 audit(1719327254.426:609): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.441000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.581814 kernel: audit: type=1400 audit(1719327254.441:610): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.441000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=40077f7220 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.608208 kernel: audit: type=1300 audit(1719327254.441:610): arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=40077f7220 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.441000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.630882 kernel: audit: type=1327 audit(1719327254.441:610): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.517000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.651845 kernel: audit: type=1400 audit(1719327254.517:611): avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.517000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=4007dabf80 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.517000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.555000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.555000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001efed80 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:14.555000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:14.555000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.555000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000e5c960 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:14.555000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:14.574000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:14.574000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=40081068a0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.574000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:14.574000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file 
permissive=0 Jun 25 14:54:14.574000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6c a1=4007e24460 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:54:14.574000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:54:15.551668 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:37116.service - OpenSSH per-connection server daemon (10.200.16.10:37116). Jun 25 14:54:15.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:37116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:16.001000 audit[5599]: USER_ACCT pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:16.003000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:16.003000 audit[5599]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcfcc590 a2=3 a3=1 items=0 ppid=1 pid=5599 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:16.003000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:16.005623 sshd[5599]: Accepted publickey for core from 10.200.16.10 port 37116 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:16.006063 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:16.010760 systemd-logind[1480]: New session 10 of user core. Jun 25 14:54:16.014439 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 14:54:16.017000 audit[5599]: USER_START pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:16.019000 audit[5606]: CRED_ACQ pid=5606 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:16.470055 sshd[5599]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:16.469000 audit[5599]: USER_END pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:16.470000 audit[5599]: CRED_DISP pid=5599 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:16.474018 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:54:16.474427 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:54:16.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.36:22-10.200.16.10:37116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:16.475323 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:37116.service: Deactivated successfully. Jun 25 14:54:16.476602 systemd-logind[1480]: Removed session 10. 
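Each SSH login above leaves a fixed audit trail keyed by the session id: USER_ACCT and CRED_ACQ at authentication, USER_START when the PAM session opens, then USER_END and CRED_DISP when it closes, with systemd's SERVICE_START/SERVICE_STOP records bracketing the per-connection sshd unit. A rough sketch for pairing those records by their ses= field (a hypothetical helper, not an existing tool; it assumes one journal entry per line):

    import re

    START_END = re.compile(r'audit\[\d+\]: (USER_START|USER_END)\b.*\bses=(\d+)')

    def session_events(lines):
        # Collect the start/end event types seen for each audit session id.
        sessions: dict[str, list[str]] = {}
        for line in lines:
            m = START_END.search(line)
            if m:
                sessions.setdefault(m.group(2), []).append(m.group(1))
        return sessions

Session 10 above, for example, shows USER_START at 14:54:16.017 and USER_END at 14:54:16.469, so the whole session lasted well under a second.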
Jun 25 14:54:19.837000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:19.842825 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 14:54:19.842922 kernel: audit: type=1400 audit(1719327259.837:625): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:19.837000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000516820 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:19.888556 kernel: audit: type=1300 audit(1719327259.837:625): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000516820 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:19.837000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:19.911715 kernel: audit: type=1327 audit(1719327259.837:625): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:19.866000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:19.932202 kernel: audit: type=1400 audit(1719327259.866:626): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:19.866000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40004d6fc0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:19.958628 kernel: audit: type=1300 audit(1719327259.866:626): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40004d6fc0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:19.866000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:19.981554 kernel: audit: type=1327 audit(1719327259.866:626): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:19.872000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:20.002000 kernel: audit: type=1400 audit(1719327259.872:627): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:19.872000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40005168a0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:20.027958 kernel: audit: type=1300 audit(1719327259.872:627): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40005168a0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:54:19.872000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:20.050857 kernel: audit: type=1327 audit(1719327259.872:627): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:19.872000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:20.071312 kernel: audit: type=1400 audit(1719327259.872:628): avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:54:19.872000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000516920 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 
key=(null) Jun 25 14:54:19.872000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:54:20.773636 systemd[1]: run-containerd-runc-k8s.io-01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461-runc.mgDaHd.mount: Deactivated successfully. Jun 25 14:54:21.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:37122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:21.570933 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:37122.service - OpenSSH per-connection server daemon (10.200.16.10:37122). Jun 25 14:54:22.022000 audit[5638]: USER_ACCT pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:22.023736 sshd[5638]: Accepted publickey for core from 10.200.16.10 port 37122 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:22.023000 audit[5638]: CRED_ACQ pid=5638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:22.023000 audit[5638]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff724eec0 a2=3 a3=1 items=0 ppid=1 pid=5638 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:22.023000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:22.025413 sshd[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:22.030150 systemd-logind[1480]: New session 11 of user core. Jun 25 14:54:22.036501 systemd[1]: Started session-11.scope - Session 11 of User core. 
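The AVC records interleaved above all share one shape: kube-apiserver and kube-controller-manager, confined as container_t, are denied the watch permission on certificate files under /etc/kubernetes/pki, which carry the etc_t label on the overlay filesystem, and permissive=0 means the denial was enforced rather than merely logged. A small sketch for summarizing such denials out of a journal dump (again a hypothetical helper, not an existing tool):

    import re
    from collections import Counter

    AVC = re.compile(
        r'avc:\s+denied\s+\{ (?P<perm>\w+) \}.*?'
        r'comm="(?P<comm>[^"]+)"\s+path="(?P<path>[^"]+)"'
    )

    def summarize(journal_text: str) -> Counter:
        # Count (command, permission, path) triples across every AVC denial found.
        return Counter((m["comm"], m["perm"], m["path"]) for m in AVC.finditer(journal_text))

Run over this section, every triple would involve the watch permission and one of ca.crt, apiserver.crt, front-proxy-ca.crt or front-proxy-client.crt.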
Jun 25 14:54:22.040000 audit[5638]: USER_START pid=5638 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:22.041000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:22.434455 sshd[5638]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:22.434000 audit[5638]: USER_END pid=5638 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:22.434000 audit[5638]: CRED_DISP pid=5638 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:22.437627 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:54:22.437791 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:54:22.438616 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:37122.service: Deactivated successfully. Jun 25 14:54:22.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.36:22-10.200.16.10:37122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:22.439743 systemd-logind[1480]: Removed session 11. Jun 25 14:54:27.520405 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:41782.service - OpenSSH per-connection server daemon (10.200.16.10:41782). Jun 25 14:54:27.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:41782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:27.525732 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 14:54:27.525782 kernel: audit: type=1130 audit(1719327267.520:638): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:41782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:27.986000 audit[5662]: USER_ACCT pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:27.988793 sshd[5662]: Accepted publickey for core from 10.200.16.10 port 41782 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:27.989657 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:27.986000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.013926 systemd-logind[1480]: New session 12 of user core. Jun 25 14:54:28.042932 kernel: audit: type=1101 audit(1719327267.986:639): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.042980 kernel: audit: type=1103 audit(1719327267.986:640): pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.043007 kernel: audit: type=1006 audit(1719327267.986:641): pid=5662 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 14:54:28.043025 kernel: audit: type=1300 audit(1719327267.986:641): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0fcee20 a2=3 a3=1 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:27.986000 audit[5662]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0fcee20 a2=3 a3=1 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:28.042568 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 14:54:27.986000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:28.071275 kernel: audit: type=1327 audit(1719327267.986:641): proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:28.048000 audit[5662]: USER_START pid=5662 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.095554 kernel: audit: type=1105 audit(1719327268.048:642): pid=5662 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.048000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.115263 kernel: audit: type=1103 audit(1719327268.048:643): pid=5664 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.385071 sshd[5662]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:28.386000 audit[5662]: USER_END pid=5662 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.388393 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:54:28.388965 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:41782.service: Deactivated successfully. Jun 25 14:54:28.390350 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:54:28.391211 systemd-logind[1480]: Removed session 12. Jun 25 14:54:28.386000 audit[5662]: CRED_DISP pid=5662 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.428453 kernel: audit: type=1106 audit(1719327268.386:644): pid=5662 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.428560 kernel: audit: type=1104 audit(1719327268.386:645): pid=5662 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:28.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.36:22-10.200.16.10:41782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:33.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:41796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:33.480634 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:41796.service - OpenSSH per-connection server daemon (10.200.16.10:41796). Jun 25 14:54:33.484860 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:54:33.484962 kernel: audit: type=1130 audit(1719327273.479:647): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:41796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:33.971000 audit[5700]: USER_ACCT pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:33.972978 sshd[5700]: Accepted publickey for core from 10.200.16.10 port 41796 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:33.995277 kernel: audit: type=1101 audit(1719327273.971:648): pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:33.996174 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:33.994000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.020833 systemd-logind[1480]: New session 13 of user core. Jun 25 14:54:34.031153 kernel: audit: type=1103 audit(1719327273.994:649): pid=5700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.031223 kernel: audit: type=1006 audit(1719327273.994:650): pid=5700 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 14:54:34.031296 kernel: audit: type=1300 audit(1719327273.994:650): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4b2d480 a2=3 a3=1 items=0 ppid=1 pid=5700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:33.994000 audit[5700]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4b2d480 a2=3 a3=1 items=0 ppid=1 pid=5700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:34.030499 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 14:54:33.994000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:34.058088 kernel: audit: type=1327 audit(1719327273.994:650): proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:34.035000 audit[5700]: USER_START pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.081789 kernel: audit: type=1105 audit(1719327274.035:651): pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.037000 audit[5702]: CRED_ACQ pid=5702 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.102547 kernel: audit: type=1103 audit(1719327274.037:652): pid=5702 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.430465 sshd[5700]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:34.430000 audit[5700]: USER_END pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.435095 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:54:34.435987 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:41796.service: Deactivated successfully. Jun 25 14:54:34.455003 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:54:34.432000 audit[5700]: CRED_DISP pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.456305 systemd-logind[1480]: Removed session 13. Jun 25 14:54:34.473918 kernel: audit: type=1106 audit(1719327274.430:653): pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.474040 kernel: audit: type=1104 audit(1719327274.432:654): pid=5700 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:34.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.36:22-10.200.16.10:41796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:34.531705 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:41810.service - OpenSSH per-connection server daemon (10.200.16.10:41810). Jun 25 14:54:34.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.36:22-10.200.16.10:41810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:35.012000 audit[5713]: USER_ACCT pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:35.014309 sshd[5713]: Accepted publickey for core from 10.200.16.10 port 41810 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:35.015162 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:35.013000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:35.013000 audit[5713]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdbbeafd0 a2=3 a3=1 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:35.013000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:35.020692 systemd-logind[1480]: New session 14 of user core. Jun 25 14:54:35.024460 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 14:54:35.028000 audit[5713]: USER_START pid=5713 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:35.030000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:35.474174 sshd[5713]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:35.474000 audit[5713]: USER_END pid=5713 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:35.474000 audit[5713]: CRED_DISP pid=5713 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:35.477284 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:41810.service: Deactivated successfully. Jun 25 14:54:35.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.36:22-10.200.16.10:41810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:35.478064 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:54:35.478715 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:54:35.479687 systemd-logind[1480]: Removed session 14. Jun 25 14:54:35.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:47026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:35.564778 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:47026.service - OpenSSH per-connection server daemon (10.200.16.10:47026). Jun 25 14:54:36.046000 audit[5723]: USER_ACCT pid=5723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:36.047994 sshd[5723]: Accepted publickey for core from 10.200.16.10 port 47026 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:36.047000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:36.048000 audit[5723]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4e79580 a2=3 a3=1 items=0 ppid=1 pid=5723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:36.048000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:36.049617 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:36.054113 systemd-logind[1480]: New session 15 of user core. Jun 25 14:54:36.058529 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 14:54:36.063000 audit[5723]: USER_START pid=5723 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:36.064000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:36.472453 sshd[5723]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:36.472000 audit[5723]: USER_END pid=5723 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:36.473000 audit[5723]: CRED_DISP pid=5723 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:36.476330 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. 
Jun 25 14:54:36.476603 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:47026.service: Deactivated successfully. Jun 25 14:54:36.477390 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:54:36.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.36:22-10.200.16.10:47026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:36.478439 systemd-logind[1480]: Removed session 15. Jun 25 14:54:41.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:41.562814 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:47032.service - OpenSSH per-connection server daemon (10.200.16.10:47032). Jun 25 14:54:41.567026 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:54:41.567112 kernel: audit: type=1130 audit(1719327281.561:674): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:42.070000 audit[5774]: USER_ACCT pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.072022 sshd[5774]: Accepted publickey for core from 10.200.16.10 port 47032 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:42.094267 kernel: audit: type=1101 audit(1719327282.070:675): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.093000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.094979 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:42.127552 kernel: audit: type=1103 audit(1719327282.093:676): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.127681 kernel: audit: type=1006 audit(1719327282.093:677): pid=5774 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 14:54:42.093000 audit[5774]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1e29f30 a2=3 a3=1 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.132506 systemd-logind[1480]: New session 16 of user core. 
Jun 25 14:54:42.159126 kernel: audit: type=1300 audit(1719327282.093:677): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1e29f30 a2=3 a3=1 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:42.159161 kernel: audit: type=1327 audit(1719327282.093:677): proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:42.093000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:42.158519 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 14:54:42.161000 audit[5774]: USER_START pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.163000 audit[5776]: CRED_ACQ pid=5776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.206351 kernel: audit: type=1105 audit(1719327282.161:678): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.206477 kernel: audit: type=1103 audit(1719327282.163:679): pid=5776 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.419719 update_engine[1484]: I0625 14:54:42.419589 1484 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 14:54:42.419719 update_engine[1484]: I0625 14:54:42.419648 1484 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 14:54:42.420051 update_engine[1484]: I0625 14:54:42.419847 1484 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 14:54:42.420224 update_engine[1484]: I0625 14:54:42.420202 1484 omaha_request_params.cc:62] Current group set to stable Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421093 1484 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421115 1484 update_attempter.cc:643] Scheduling an action processor start. 
Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421133 1484 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421171 1484 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421243 1484 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421248 1484 omaha_request_action.cc:272] Request: Jun 25 14:54:42.421555 update_engine[1484]: I0625 14:54:42.421252 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:54:42.421947 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 14:54:42.423043 update_engine[1484]: I0625 14:54:42.423000 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:54:42.423315 update_engine[1484]: I0625 14:54:42.423295 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:54:42.537705 update_engine[1484]: E0625 14:54:42.537664 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:54:42.537860 update_engine[1484]: I0625 14:54:42.537797 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 14:54:42.540466 sshd[5774]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:42.540000 audit[5774]: USER_END pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.540000 audit[5774]: CRED_DISP pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.566336 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:47032.service: Deactivated successfully. Jun 25 14:54:42.567154 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:54:42.568685 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:54:42.569656 systemd-logind[1480]: Removed session 16. 
Jun 25 14:54:42.585142 kernel: audit: type=1106 audit(1719327282.540:680): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.585271 kernel: audit: type=1104 audit(1719327282.540:681): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:42.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.36:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:47.629650 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:36950.service - OpenSSH per-connection server daemon (10.200.16.10:36950). Jun 25 14:54:47.653969 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:54:47.654121 kernel: audit: type=1130 audit(1719327287.628:683): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:36950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:47.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:36950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:48.076000 audit[5791]: USER_ACCT pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.078433 sshd[5791]: Accepted publickey for core from 10.200.16.10 port 36950 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:48.101266 kernel: audit: type=1101 audit(1719327288.076:684): pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.101000 audit[5791]: CRED_ACQ pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.103052 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:48.137544 kernel: audit: type=1103 audit(1719327288.101:685): pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.137638 kernel: audit: type=1006 audit(1719327288.101:686): pid=5791 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 14:54:48.101000 audit[5791]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9712d10 a2=3 a3=1 items=0 ppid=1 pid=5791 
auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:48.158828 kernel: audit: type=1300 audit(1719327288.101:686): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9712d10 a2=3 a3=1 items=0 ppid=1 pid=5791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:48.101000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:48.164778 systemd-logind[1480]: New session 17 of user core. Jun 25 14:54:48.173908 kernel: audit: type=1327 audit(1719327288.101:686): proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:48.173512 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 14:54:48.177000 audit[5791]: USER_START pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.179000 audit[5793]: CRED_ACQ pid=5793 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.222413 kernel: audit: type=1105 audit(1719327288.177:687): pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.222528 kernel: audit: type=1103 audit(1719327288.179:688): pid=5793 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.518489 sshd[5791]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:48.518000 audit[5791]: USER_END pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.522163 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:54:48.522755 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:36950.service: Deactivated successfully. Jun 25 14:54:48.519000 audit[5791]: CRED_DISP pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.544852 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:54:48.545938 systemd-logind[1480]: Removed session 17. 
Jun 25 14:54:48.564627 kernel: audit: type=1106 audit(1719327288.518:689): pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.564725 kernel: audit: type=1104 audit(1719327288.519:690): pid=5791 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:48.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.36:22-10.200.16.10:36950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:48.605539 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:36964.service - OpenSSH per-connection server daemon (10.200.16.10:36964). Jun 25 14:54:48.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:36964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:49.086000 audit[5803]: USER_ACCT pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:49.087967 sshd[5803]: Accepted publickey for core from 10.200.16.10 port 36964 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:49.088000 audit[5803]: CRED_ACQ pid=5803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:49.088000 audit[5803]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe22ff300 a2=3 a3=1 items=0 ppid=1 pid=5803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:49.088000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:49.090171 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:49.094554 systemd-logind[1480]: New session 18 of user core. Jun 25 14:54:49.097473 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 14:54:49.100000 audit[5803]: USER_START pid=5803 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:49.102000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:49.633645 sshd[5803]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:49.633000 audit[5803]: USER_END pid=5803 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:49.634000 audit[5803]: CRED_DISP pid=5803 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:49.637121 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:36964.service: Deactivated successfully. Jun 25 14:54:49.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.36:22-10.200.16.10:36964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:49.637949 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:54:49.638544 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit. Jun 25 14:54:49.639676 systemd-logind[1480]: Removed session 18. Jun 25 14:54:49.730648 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:36980.service - OpenSSH per-connection server daemon (10.200.16.10:36980). Jun 25 14:54:49.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:36980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:50.185000 audit[5813]: USER_ACCT pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:50.186605 sshd[5813]: Accepted publickey for core from 10.200.16.10 port 36980 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:50.186000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:50.186000 audit[5813]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe905b460 a2=3 a3=1 items=0 ppid=1 pid=5813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:50.186000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:50.188173 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:50.192288 systemd-logind[1480]: New session 19 of user core. Jun 25 14:54:50.197463 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 14:54:50.200000 audit[5813]: USER_START pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:50.202000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.004000 audit[5847]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:52.004000 audit[5847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffca659100 a2=0 a3=1 items=0 ppid=3023 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:52.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:52.006000 audit[5847]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5847 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:52.006000 audit[5847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffca659100 a2=0 a3=1 items=0 ppid=3023 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:52.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:52.020000 audit[5849]: NETFILTER_CFG table=filter:132 family=2 entries=32 op=nft_register_rule pid=5849 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Jun 25 14:54:52.020000 audit[5849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffc800c3e0 a2=0 a3=1 items=0 ppid=3023 pid=5849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:52.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:52.023000 audit[5849]: NETFILTER_CFG table=nat:133 family=2 entries=22 op=nft_register_rule pid=5849 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:52.023000 audit[5849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffc800c3e0 a2=0 a3=1 items=0 ppid=3023 pid=5849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:52.023000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:52.080002 sshd[5813]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:52.080000 audit[5813]: USER_END pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.080000 audit[5813]: CRED_DISP pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.084006 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:36980.service: Deactivated successfully. Jun 25 14:54:52.084834 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:54:52.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.36:22-10.200.16.10:36980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:52.085193 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:54:52.086087 systemd-logind[1480]: Removed session 19. Jun 25 14:54:52.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:36990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:52.168636 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:36990.service - OpenSSH per-connection server daemon (10.200.16.10:36990). Jun 25 14:54:52.420176 update_engine[1484]: I0625 14:54:52.419829 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:54:52.420176 update_engine[1484]: I0625 14:54:52.420050 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:54:52.420561 update_engine[1484]: I0625 14:54:52.420272 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 14:54:52.531643 update_engine[1484]: E0625 14:54:52.531599 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:54:52.531819 update_engine[1484]: I0625 14:54:52.531724 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 25 14:54:52.614000 audit[5852]: USER_ACCT pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.615892 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 36990 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:52.615000 audit[5852]: CRED_ACQ pid=5852 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.615000 audit[5852]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff76bc020 a2=3 a3=1 items=0 ppid=1 pid=5852 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:52.615000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:52.617669 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:52.622425 systemd-logind[1480]: New session 20 of user core. Jun 25 14:54:52.628446 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 14:54:52.631000 audit[5852]: USER_START pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.636795 kernel: kauditd_printk_skb: 41 callbacks suppressed Jun 25 14:54:52.636880 kernel: audit: type=1105 audit(1719327292.631:718): pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.636000 audit[5854]: CRED_ACQ pid=5854 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:52.679727 kernel: audit: type=1103 audit(1719327292.636:719): pid=5854 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.124034 sshd[5852]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:53.124000 audit[5852]: USER_END pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.127313 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:36990.service: 
Deactivated successfully. Jun 25 14:54:53.128083 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 14:54:53.148958 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit. Jun 25 14:54:53.124000 audit[5852]: CRED_DISP pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.150436 systemd-logind[1480]: Removed session 20. Jun 25 14:54:53.169108 kernel: audit: type=1106 audit(1719327293.124:720): pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.169250 kernel: audit: type=1104 audit(1719327293.124:721): pid=5852 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:36990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:53.188484 kernel: audit: type=1131 audit(1719327293.124:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.36:22-10.200.16.10:36990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:53.207555 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:37004.service - OpenSSH per-connection server daemon (10.200.16.10:37004). Jun 25 14:54:53.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:37004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:53.229275 kernel: audit: type=1130 audit(1719327293.206:723): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:37004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:53.657000 audit[5861]: USER_ACCT pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.659183 sshd[5861]: Accepted publickey for core from 10.200.16.10 port 37004 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:53.679000 audit[5861]: CRED_ACQ pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.681509 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:53.699725 kernel: audit: type=1101 audit(1719327293.657:724): pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.699854 kernel: audit: type=1103 audit(1719327293.679:725): pid=5861 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.713270 kernel: audit: type=1006 audit(1719327293.679:726): pid=5861 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jun 25 14:54:53.679000 audit[5861]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd0d2a50 a2=3 a3=1 items=0 ppid=1 pid=5861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:53.719799 systemd-logind[1480]: New session 21 of user core. Jun 25 14:54:53.734904 kernel: audit: type=1300 audit(1719327293.679:726): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd0d2a50 a2=3 a3=1 items=0 ppid=1 pid=5861 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:53.679000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:53.738484 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 14:54:53.743000 audit[5861]: USER_START pid=5861 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:53.745000 audit[5863]: CRED_ACQ pid=5863 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:54.068297 sshd[5861]: pam_unix(sshd:session): session closed for user core Jun 25 14:54:54.068000 audit[5861]: USER_END pid=5861 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:54.068000 audit[5861]: CRED_DISP pid=5861 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:54.071415 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:37004.service: Deactivated successfully. Jun 25 14:54:54.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.36:22-10.200.16.10:37004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:54.072262 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 14:54:54.072865 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit. Jun 25 14:54:54.073966 systemd-logind[1480]: Removed session 21. 
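[Editorial note, not part of the captured log] The PROCTITLE values in the audit records above are hex-encoded process titles with arguments separated by NUL bytes; for example 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]", and the much longer values attached to the kube-apiserver and kube-controller-manager records further down decode to their (truncated) command lines. A minimal decoding sketch, using only the Python standard library:

    # Decode an audit PROCTITLE value: hex-encoded bytes, argv separated by NUL.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]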
Jun 25 14:54:58.799000 audit[5901]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:58.803537 kernel: kauditd_printk_skb: 6 callbacks suppressed Jun 25 14:54:58.803632 kernel: audit: type=1325 audit(1719327298.799:732): table=filter:134 family=2 entries=20 op=nft_register_rule pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:58.799000 audit[5901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffffb6a7330 a2=0 a3=1 items=0 ppid=3023 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.841137 kernel: audit: type=1300 audit(1719327298.799:732): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffffb6a7330 a2=0 a3=1 items=0 ppid=3023 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:58.854474 kernel: audit: type=1327 audit(1719327298.799:732): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:58.804000 audit[5901]: NETFILTER_CFG table=nat:135 family=2 entries=106 op=nft_register_chain pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:58.868050 kernel: audit: type=1325 audit(1719327298.804:733): table=nat:135 family=2 entries=106 op=nft_register_chain pid=5901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:54:58.804000 audit[5901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=fffffb6a7330 a2=0 a3=1 items=0 ppid=3023 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.892459 kernel: audit: type=1300 audit(1719327298.804:733): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=fffffb6a7330 a2=0 a3=1 items=0 ppid=3023 pid=5901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:58.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:58.905037 kernel: audit: type=1327 audit(1719327298.804:733): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:54:59.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:35642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:59.152317 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:35642.service - OpenSSH per-connection server daemon (10.200.16.10:35642). 
Jun 25 14:54:59.172323 kernel: audit: type=1130 audit(1719327299.152:734): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:35642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:59.609000 audit[5906]: USER_ACCT pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:59.609576 sshd[5906]: Accepted publickey for core from 10.200.16.10 port 35642 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:54:59.635159 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:59.635461 kernel: audit: type=1101 audit(1719327299.609:735): pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:59.635514 kernel: audit: type=1103 audit(1719327299.634:736): pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:59.634000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:59.641482 systemd-logind[1480]: New session 22 of user core. Jun 25 14:54:59.672260 kernel: audit: type=1006 audit(1719327299.634:737): pid=5906 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 14:54:59.634000 audit[5906]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdee94820 a2=3 a3=1 items=0 ppid=1 pid=5906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:59.634000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:54:59.671586 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 14:54:59.676000 audit[5906]: USER_START pid=5906 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:54:59.678000 audit[5908]: CRED_ACQ pid=5908 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:00.027360 sshd[5906]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:00.028000 audit[5906]: USER_END pid=5906 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:00.029000 audit[5906]: CRED_DISP pid=5906 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:00.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.36:22-10.200.16.10:35642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:00.031096 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit. Jun 25 14:55:00.031405 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:35642.service: Deactivated successfully. Jun 25 14:55:00.032163 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 14:55:00.032934 systemd-logind[1480]: Removed session 22. Jun 25 14:55:02.421576 update_engine[1484]: I0625 14:55:02.421522 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:55:02.421926 update_engine[1484]: I0625 14:55:02.421756 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:55:02.421968 update_engine[1484]: I0625 14:55:02.421957 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:55:02.435839 update_engine[1484]: E0625 14:55:02.435810 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:55:02.435930 update_engine[1484]: I0625 14:55:02.435920 1484 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 25 14:55:05.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:37402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:05.110572 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:37402.service - OpenSSH per-connection server daemon (10.200.16.10:37402). Jun 25 14:55:05.114718 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:55:05.114823 kernel: audit: type=1130 audit(1719327305.110:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:37402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:05.565273 sshd[5920]: Accepted publickey for core from 10.200.16.10 port 37402 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:05.564000 audit[5920]: USER_ACCT pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.587000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.588596 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:05.607255 kernel: audit: type=1101 audit(1719327305.564:744): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.607374 kernel: audit: type=1103 audit(1719327305.587:745): pid=5920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.613324 systemd-logind[1480]: New session 23 of user core. Jun 25 14:55:05.646036 kernel: audit: type=1006 audit(1719327305.588:746): pid=5920 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 14:55:05.646117 kernel: audit: type=1300 audit(1719327305.588:746): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9aa35e0 a2=3 a3=1 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.646143 kernel: audit: type=1327 audit(1719327305.588:746): proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:05.588000 audit[5920]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd9aa35e0 a2=3 a3=1 items=0 ppid=1 pid=5920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:05.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:05.645498 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 14:55:05.651000 audit[5920]: USER_START pid=5920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.675184 kernel: audit: type=1105 audit(1719327305.651:747): pid=5920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.653000 audit[5922]: CRED_ACQ pid=5922 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.694730 kernel: audit: type=1103 audit(1719327305.653:748): pid=5922 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.989483 sshd[5920]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:05.990000 audit[5920]: USER_END pid=5920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.992971 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 14:55:05.993799 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:37402.service: Deactivated successfully. Jun 25 14:55:06.014126 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit. Jun 25 14:55:05.991000 audit[5920]: CRED_DISP pid=5920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:06.015344 systemd-logind[1480]: Removed session 23. Jun 25 14:55:06.032667 kernel: audit: type=1106 audit(1719327305.990:749): pid=5920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:06.032775 kernel: audit: type=1104 audit(1719327305.991:750): pid=5920 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:05.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.36:22-10.200.16.10:37402 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:11.075658 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:37410.service - OpenSSH per-connection server daemon (10.200.16.10:37410). 
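[Editorial note, not part of the captured log] Every audit record here carries a stamp of the form audit(<seconds>.<millis>:<serial>). The seconds field is a Unix epoch timestamp that lines up with the journal's wall-clock prefix (1719327311 is Jun 25 14:55:11 UTC), and records sharing the same serial (e.g. the SYSCALL and PROCTITLE lines tagged :755 below) belong to one audit event. A small parsing sketch, assuming only the standard library:

    import re
    from datetime import datetime, timezone

    # Split "audit(1719327311.074:752)" into a UTC timestamp and the event serial.
    def parse_audit_stamp(stamp: str):
        m = re.match(r"audit\((\d+)\.(\d+):(\d+)\)", stamp)
        seconds, millis, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))
        when = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=millis * 1000)
        return when, serial

    print(parse_audit_stamp("audit(1719327311.074:752)"))
    # -> (datetime.datetime(2024, 6, 25, 14, 55, 11, 74000, tzinfo=...utc), 752)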
Jun 25 14:55:11.100248 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:55:11.100369 kernel: audit: type=1130 audit(1719327311.074:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:37410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:11.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:37410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:11.522826 sshd[5936]: Accepted publickey for core from 10.200.16.10 port 37410 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:11.521000 audit[5936]: USER_ACCT pid=5936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.544000 audit[5936]: CRED_ACQ pid=5936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.546449 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:11.566815 kernel: audit: type=1101 audit(1719327311.521:753): pid=5936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.566941 kernel: audit: type=1103 audit(1719327311.544:754): pid=5936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.571879 systemd-logind[1480]: New session 24 of user core. Jun 25 14:55:11.602888 kernel: audit: type=1006 audit(1719327311.544:755): pid=5936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 14:55:11.602925 kernel: audit: type=1300 audit(1719327311.544:755): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea533bb0 a2=3 a3=1 items=0 ppid=1 pid=5936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:11.544000 audit[5936]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea533bb0 a2=3 a3=1 items=0 ppid=1 pid=5936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:11.602554 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 14:55:11.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:11.611833 kernel: audit: type=1327 audit(1719327311.544:755): proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:11.615581 kernel: audit: type=1105 audit(1719327311.608:756): pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.608000 audit[5936]: USER_START pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.636000 audit[5938]: CRED_ACQ pid=5938 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.655830 kernel: audit: type=1103 audit(1719327311.636:757): pid=5938 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.974912 sshd[5936]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:11.974000 audit[5936]: USER_END pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.977772 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 14:55:11.978450 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:37410.service: Deactivated successfully. Jun 25 14:55:11.980622 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit. Jun 25 14:55:11.981573 systemd-logind[1480]: Removed session 24. Jun 25 14:55:11.974000 audit[5936]: CRED_DISP pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:12.019682 kernel: audit: type=1106 audit(1719327311.974:758): pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:12.019834 kernel: audit: type=1104 audit(1719327311.974:759): pid=5936 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:11.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.36:22-10.200.16.10:37410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:12.420273 update_engine[1484]: I0625 14:55:12.419781 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:55:12.420273 update_engine[1484]: I0625 14:55:12.420003 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:55:12.420273 update_engine[1484]: I0625 14:55:12.420218 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:55:12.439919 update_engine[1484]: E0625 14:55:12.439883 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:55:12.440031 update_engine[1484]: I0625 14:55:12.439996 1484 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 14:55:12.440031 update_engine[1484]: I0625 14:55:12.440004 1484 omaha_request_action.cc:617] Omaha request response: Jun 25 14:55:12.440113 update_engine[1484]: E0625 14:55:12.440093 1484 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 25 14:55:12.440153 update_engine[1484]: I0625 14:55:12.440119 1484 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 25 14:55:12.440153 update_engine[1484]: I0625 14:55:12.440122 1484 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 14:55:12.440153 update_engine[1484]: I0625 14:55:12.440125 1484 update_attempter.cc:306] Processing Done. Jun 25 14:55:12.440153 update_engine[1484]: E0625 14:55:12.440138 1484 update_attempter.cc:619] Update failed. Jun 25 14:55:12.440153 update_engine[1484]: I0625 14:55:12.440141 1484 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 25 14:55:12.440153 update_engine[1484]: I0625 14:55:12.440145 1484 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 25 14:55:12.440153 update_engine[1484]: I0625 14:55:12.440148 1484 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 25 14:55:12.440339 update_engine[1484]: I0625 14:55:12.440254 1484 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 14:55:12.440339 update_engine[1484]: I0625 14:55:12.440274 1484 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 14:55:12.440339 update_engine[1484]: I0625 14:55:12.440276 1484 omaha_request_action.cc:272] Request: Jun 25 14:55:12.440339 update_engine[1484]: Jun 25 14:55:12.440339 update_engine[1484]: Jun 25 14:55:12.440339 update_engine[1484]: Jun 25 14:55:12.440339 update_engine[1484]: Jun 25 14:55:12.440339 update_engine[1484]: Jun 25 14:55:12.440339 update_engine[1484]: Jun 25 14:55:12.440339 update_engine[1484]: I0625 14:55:12.440280 1484 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:55:12.440539 update_engine[1484]: I0625 14:55:12.440430 1484 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:55:12.440861 update_engine[1484]: I0625 14:55:12.440595 1484 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 14:55:12.440923 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 25 14:55:12.448786 update_engine[1484]: E0625 14:55:12.448755 1484 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:55:12.448878 update_engine[1484]: I0625 14:55:12.448859 1484 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 14:55:12.448878 update_engine[1484]: I0625 14:55:12.448865 1484 omaha_request_action.cc:617] Omaha request response: Jun 25 14:55:12.448878 update_engine[1484]: I0625 14:55:12.448869 1484 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 14:55:12.448878 update_engine[1484]: I0625 14:55:12.448871 1484 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 14:55:12.448878 update_engine[1484]: I0625 14:55:12.448874 1484 update_attempter.cc:306] Processing Done. Jun 25 14:55:12.448878 update_engine[1484]: I0625 14:55:12.448879 1484 update_attempter.cc:310] Error event sent. Jun 25 14:55:12.449315 update_engine[1484]: I0625 14:55:12.448886 1484 update_check_scheduler.cc:74] Next update check in 42m51s Jun 25 14:55:12.449347 locksmithd[1528]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 25 14:55:14.408000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=3646736 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.408000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=72 a1=4010005d70 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:55:14.408000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:14.429000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=3646742 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.429000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=73 a1=40107e2060 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:55:14.429000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:14.443000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.443000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=71 a1=400fc2fc60 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:55:14.443000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:14.518000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.518000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=71 a1=40116e9440 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:55:14.518000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:14.555000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.555000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.555000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4002a29f40 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:14.555000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:14.555000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40026d7650 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:14.555000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:14.576000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.576000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=73 a1=40116e9830 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:55:14.576000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:14.577000 audit[2741]: AVC avc: denied { watch } for pid=2741 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c339,c679 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:14.577000 audit[2741]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=73 a1=400f35f8c0 a2=fc6 a3=0 items=0 ppid=2577 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c339,c679 key=(null) Jun 25 14:55:14.577000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E3230302E32302E3336002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B7562 Jun 25 14:55:17.073653 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:46410.service - OpenSSH per-connection server daemon (10.200.16.10:46410). Jun 25 14:55:17.100332 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 14:55:17.100466 kernel: audit: type=1130 audit(1719327317.072:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:46410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:17.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:46410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:17.526000 audit[5949]: USER_ACCT pid=5949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.528513 sshd[5949]: Accepted publickey for core from 10.200.16.10 port 46410 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:17.548000 audit[5949]: CRED_ACQ pid=5949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.550677 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:17.569208 kernel: audit: type=1101 audit(1719327317.526:770): pid=5949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.569338 kernel: audit: type=1103 audit(1719327317.548:771): pid=5949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.583119 kernel: audit: type=1006 audit(1719327317.549:772): pid=5949 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 14:55:17.549000 audit[5949]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5bb2830 a2=3 a3=1 items=0 ppid=1 pid=5949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:17.604891 kernel: audit: type=1300 audit(1719327317.549:772): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5bb2830 a2=3 a3=1 items=0 ppid=1 pid=5949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:17.549000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:17.613772 kernel: audit: type=1327 audit(1719327317.549:772): proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:17.617353 systemd-logind[1480]: New session 25 of user core. Jun 25 14:55:17.622451 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 14:55:17.625000 audit[5949]: USER_START pid=5949 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.651000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.671239 kernel: audit: type=1105 audit(1719327317.625:773): pid=5949 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:17.671339 kernel: audit: type=1103 audit(1719327317.651:774): pid=5951 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:18.002492 sshd[5949]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:18.002000 audit[5949]: USER_END pid=5949 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:18.006387 systemd-logind[1480]: Session 25 logged out. Waiting for processes to exit. Jun 25 14:55:18.007596 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 14:55:18.008681 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:46410.service: Deactivated successfully. Jun 25 14:55:18.010161 systemd-logind[1480]: Removed session 25. Jun 25 14:55:18.002000 audit[5949]: CRED_DISP pid=5949 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:18.047259 kernel: audit: type=1106 audit(1719327318.002:775): pid=5949 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:18.047396 kernel: audit: type=1104 audit(1719327318.002:776): pid=5949 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:18.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.36:22-10.200.16.10:46410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:19.838000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:19.838000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001d312e0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:19.838000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:19.866000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:19.866000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001d31300 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:19.866000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:19.873000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:19.873000 audit[2727]: AVC avc: denied { watch } for pid=2727 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:19.873000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4001d31320 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:19.873000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:19.873000 audit[2727]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001efe2a0 a2=fc6 a3=0 items=0 ppid=2583 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:19.873000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:20.767680 systemd[1]: run-containerd-runc-k8s.io-01bc73a65c422a8a705fd6407d7bf9c9542f81d1db733f71cab17cce6f397461-runc.k5WdVS.mount: Deactivated successfully. Jun 25 14:55:23.095757 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:46414.service - OpenSSH per-connection server daemon (10.200.16.10:46414). Jun 25 14:55:23.122002 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 14:55:23.122122 kernel: audit: type=1130 audit(1719327323.094:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:46414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:23.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:46414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:23.542000 audit[5990]: USER_ACCT pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.544044 sshd[5990]: Accepted publickey for core from 10.200.16.10 port 46414 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:23.564000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.566850 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:23.585167 kernel: audit: type=1101 audit(1719327323.542:783): pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.585307 kernel: audit: type=1103 audit(1719327323.564:784): pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.598944 kernel: audit: type=1006 audit(1719327323.565:785): pid=5990 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 14:55:23.565000 audit[5990]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9f1fe30 a2=3 a3=1 items=0 ppid=1 pid=5990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:23.621158 kernel: audit: type=1300 audit(1719327323.565:785): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9f1fe30 a2=3 a3=1 items=0 ppid=1 pid=5990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 14:55:23.623342 kernel: audit: type=1327 audit(1719327323.565:785): proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:23.565000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:23.633654 systemd-logind[1480]: New session 26 of user core. Jun 25 14:55:23.638479 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 14:55:23.641000 audit[5990]: USER_START pid=5990 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.665000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.686087 kernel: audit: type=1105 audit(1719327323.641:786): pid=5990 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.686242 kernel: audit: type=1103 audit(1719327323.665:787): pid=5992 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.992347 sshd[5990]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:23.992000 audit[5990]: USER_END pid=5990 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.996142 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 14:55:23.997247 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:46414.service: Deactivated successfully. Jun 25 14:55:24.018421 systemd-logind[1480]: Session 26 logged out. Waiting for processes to exit. Jun 25 14:55:23.992000 audit[5990]: CRED_DISP pid=5990 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:24.019524 systemd-logind[1480]: Removed session 26. 
Jun 25 14:55:24.036963 kernel: audit: type=1106 audit(1719327323.992:788): pid=5990 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:24.037102 kernel: audit: type=1104 audit(1719327323.992:789): pid=5990 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:23.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.36:22-10.200.16.10:46414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:28.775084 systemd[1]: run-containerd-runc-k8s.io-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144-runc.DbLLLS.mount: Deactivated successfully. Jun 25 14:55:29.082629 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:37698.service - OpenSSH per-connection server daemon (10.200.16.10:37698). Jun 25 14:55:29.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:37698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:29.088599 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:55:29.088772 kernel: audit: type=1130 audit(1719327329.081:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:37698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:29.532000 audit[6029]: USER_ACCT pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.534050 sshd[6029]: Accepted publickey for core from 10.200.16.10 port 37698 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:29.536250 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:29.534000 audit[6029]: CRED_ACQ pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.575438 kernel: audit: type=1101 audit(1719327329.532:792): pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.575567 kernel: audit: type=1103 audit(1719327329.534:793): pid=6029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.580243 systemd-logind[1480]: New session 27 of user core. 
Jun 25 14:55:29.619851 kernel: audit: type=1006 audit(1719327329.534:794): pid=6029 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 14:55:29.619884 kernel: audit: type=1300 audit(1719327329.534:794): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff59bb9a0 a2=3 a3=1 items=0 ppid=1 pid=6029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:29.619912 kernel: audit: type=1327 audit(1719327329.534:794): proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:29.534000 audit[6029]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff59bb9a0 a2=3 a3=1 items=0 ppid=1 pid=6029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:29.534000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:29.619462 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 14:55:29.623000 audit[6029]: USER_START pid=6029 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.625000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.668052 kernel: audit: type=1105 audit(1719327329.623:795): pid=6029 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.668154 kernel: audit: type=1103 audit(1719327329.625:796): pid=6032 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.957496 sshd[6029]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:29.957000 audit[6029]: USER_END pid=6029 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.962353 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 14:55:29.963508 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:37698.service: Deactivated successfully. Jun 25 14:55:29.982498 systemd-logind[1480]: Session 27 logged out. Waiting for processes to exit. Jun 25 14:55:29.959000 audit[6029]: CRED_DISP pid=6029 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.983803 systemd-logind[1480]: Removed session 27. 
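[Editorial note, not part of the captured log] The avc: denied { watch } records earlier in this stretch all involve kube-apiserver and kube-controller-manager, confined as container_t with permissive=0, being refused watches on certificate files under /etc/kubernetes/pki. When skimming a dump like this it can help to tally which binaries and paths are affected; a rough sketch that does so, assuming the journal has been saved to a plain text file (the name journal.txt is only a placeholder):

    import re
    from collections import Counter

    # Matches: avc: denied { watch } for ... comm="kube-apiserver" path="/etc/..."
    AVC = re.compile(r'avc:\s+denied\s+\{ (?P<perm>[^}]+)\} for .*?comm="(?P<comm>[^"]+)" path="(?P<path>[^"]+)"')

    def summarize_avc(path: str) -> Counter:
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = AVC.search(line)
                if m:
                    counts[(m.group("comm"), m.group("perm").strip(), m.group("path"))] += 1
        return counts

    for key, n in summarize_avc("journal.txt").most_common():
        print(n, *key)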
Jun 25 14:55:30.001050 kernel: audit: type=1106 audit(1719327329.957:797): pid=6029 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:30.001182 kernel: audit: type=1104 audit(1719327329.959:798): pid=6029 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:29.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.36:22-10.200.16.10:37698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:43.828403 systemd[1]: cri-containerd-21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d.scope: Deactivated successfully. Jun 25 14:55:43.841630 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:55:43.841718 kernel: audit: type=1334 audit(1719327343.833:800): prog-id=100 op=UNLOAD Jun 25 14:55:43.833000 audit: BPF prog-id=100 op=UNLOAD Jun 25 14:55:43.828697 systemd[1]: cri-containerd-21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d.scope: Consumed 3.393s CPU time. Jun 25 14:55:43.833000 audit: BPF prog-id=124 op=UNLOAD Jun 25 14:55:43.850103 kernel: audit: type=1334 audit(1719327343.833:801): prog-id=124 op=UNLOAD Jun 25 14:55:43.858565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d-rootfs.mount: Deactivated successfully. Jun 25 14:55:43.860110 containerd[1520]: time="2024-06-25T14:55:43.860054054Z" level=info msg="shim disconnected" id=21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d namespace=k8s.io Jun 25 14:55:43.860550 containerd[1520]: time="2024-06-25T14:55:43.860528303Z" level=warning msg="cleaning up after shim disconnected" id=21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d namespace=k8s.io Jun 25 14:55:43.860642 containerd[1520]: time="2024-06-25T14:55:43.860628625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:55:44.641656 kubelet[2883]: I0625 14:55:44.641619 2883 scope.go:117] "RemoveContainer" containerID="21f41c33d2cb0df476f60b61a53d24ca5304823db5d8f18a1f13198f4c46a29d" Jun 25 14:55:44.644388 containerd[1520]: time="2024-06-25T14:55:44.644348428Z" level=info msg="CreateContainer within sandbox \"1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 14:55:44.671664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255352578.mount: Deactivated successfully. 
Jun 25 14:55:44.684840 containerd[1520]: time="2024-06-25T14:55:44.684797549Z" level=info msg="CreateContainer within sandbox \"1a8ea7facb37572965c6dff94fcf54fcc3287a29604e3926e9c5d5015585a3e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3746d62a52ecede8d4a46515d16463ec6ef0678a18343229697e1bf28ab01866\"" Jun 25 14:55:44.685588 containerd[1520]: time="2024-06-25T14:55:44.685560124Z" level=info msg="StartContainer for \"3746d62a52ecede8d4a46515d16463ec6ef0678a18343229697e1bf28ab01866\"" Jun 25 14:55:44.712428 systemd[1]: Started cri-containerd-3746d62a52ecede8d4a46515d16463ec6ef0678a18343229697e1bf28ab01866.scope - libcontainer container 3746d62a52ecede8d4a46515d16463ec6ef0678a18343229697e1bf28ab01866. Jun 25 14:55:44.723000 audit: BPF prog-id=219 op=LOAD Jun 25 14:55:44.729000 audit: BPF prog-id=220 op=LOAD Jun 25 14:55:44.735076 kernel: audit: type=1334 audit(1719327344.723:802): prog-id=219 op=LOAD Jun 25 14:55:44.735172 kernel: audit: type=1334 audit(1719327344.729:803): prog-id=220 op=LOAD Jun 25 14:55:44.729000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2583 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:44.757022 kernel: audit: type=1300 audit(1719327344.729:803): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2583 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:44.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343664363261353265636564653864346134363531356431363436 Jun 25 14:55:44.779031 kernel: audit: type=1327 audit(1719327344.729:803): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343664363261353265636564653864346134363531356431363436 Jun 25 14:55:44.729000 audit: BPF prog-id=221 op=LOAD Jun 25 14:55:44.785494 kernel: audit: type=1334 audit(1719327344.729:804): prog-id=221 op=LOAD Jun 25 14:55:44.729000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2583 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:44.806956 kernel: audit: type=1300 audit(1719327344.729:804): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2583 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:44.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343664363261353265636564653864346134363531356431363436 Jun 25 14:55:44.829414 kernel: audit: 
type=1327 audit(1719327344.729:804): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343664363261353265636564653864346134363531356431363436 Jun 25 14:55:44.735000 audit: BPF prog-id=221 op=UNLOAD Jun 25 14:55:44.837100 kernel: audit: type=1334 audit(1719327344.735:805): prog-id=221 op=UNLOAD Jun 25 14:55:44.735000 audit: BPF prog-id=220 op=UNLOAD Jun 25 14:55:44.735000 audit: BPF prog-id=222 op=LOAD Jun 25 14:55:44.735000 audit[6109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=2583 pid=6109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:44.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337343664363261353265636564653864346134363531356431363436 Jun 25 14:55:44.850570 containerd[1520]: time="2024-06-25T14:55:44.850501747Z" level=info msg="StartContainer for \"3746d62a52ecede8d4a46515d16463ec6ef0678a18343229697e1bf28ab01866\" returns successfully" Jun 25 14:55:44.953000 audit: BPF prog-id=140 op=UNLOAD Jun 25 14:55:44.953101 systemd[1]: cri-containerd-4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24.scope: Deactivated successfully. Jun 25 14:55:44.953423 systemd[1]: cri-containerd-4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24.scope: Consumed 5.549s CPU time. Jun 25 14:55:44.957000 audit: BPF prog-id=143 op=UNLOAD Jun 25 14:55:44.972931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24-rootfs.mount: Deactivated successfully. Jun 25 14:55:44.974940 containerd[1520]: time="2024-06-25T14:55:44.974877927Z" level=info msg="shim disconnected" id=4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24 namespace=k8s.io Jun 25 14:55:44.975396 containerd[1520]: time="2024-06-25T14:55:44.975372096Z" level=warning msg="cleaning up after shim disconnected" id=4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24 namespace=k8s.io Jun 25 14:55:44.975505 containerd[1520]: time="2024-06-25T14:55:44.975490018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:55:45.423538 kubelet[2883]: E0625 14:55:45.423345 2883 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.36:54438->10.200.20.10:2379: read: connection timed out" Jun 25 14:55:45.645384 kubelet[2883]: I0625 14:55:45.644986 2883 scope.go:117] "RemoveContainer" containerID="4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24" Jun 25 14:55:45.647841 containerd[1520]: time="2024-06-25T14:55:45.647806067Z" level=info msg="CreateContainer within sandbox \"438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 14:55:45.677871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920236281.mount: Deactivated successfully. 
Jun 25 14:55:45.692996 containerd[1520]: time="2024-06-25T14:55:45.692952990Z" level=info msg="CreateContainer within sandbox \"438f9b5ae8e783471af4912fed90d9054ef72302c02c38976cb09966bf0863a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b\"" Jun 25 14:55:45.693664 containerd[1520]: time="2024-06-25T14:55:45.693639643Z" level=info msg="StartContainer for \"9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b\"" Jun 25 14:55:45.714493 systemd[1]: Started cri-containerd-9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b.scope - libcontainer container 9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b. Jun 25 14:55:45.726000 audit: BPF prog-id=223 op=LOAD Jun 25 14:55:45.726000 audit: BPF prog-id=224 op=LOAD Jun 25 14:55:45.726000 audit[6172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3037 pid=6172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:45.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663866376437643030373134363364643861656466316465343531 Jun 25 14:55:45.727000 audit: BPF prog-id=225 op=LOAD Jun 25 14:55:45.727000 audit[6172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3037 pid=6172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:45.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663866376437643030373134363364643861656466316465343531 Jun 25 14:55:45.727000 audit: BPF prog-id=225 op=UNLOAD Jun 25 14:55:45.727000 audit: BPF prog-id=224 op=UNLOAD Jun 25 14:55:45.727000 audit: BPF prog-id=226 op=LOAD Jun 25 14:55:45.727000 audit[6172]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3037 pid=6172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:45.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663866376437643030373134363364643861656466316465343531 Jun 25 14:55:45.743185 containerd[1520]: time="2024-06-25T14:55:45.743134368Z" level=info msg="StartContainer for \"9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b\" returns successfully" Jun 25 14:55:46.967000 audit[6120]: AVC avc: denied { watch } for pid=6120 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=3646740 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:46.967000 audit[6120]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=7 
a1=4000449ec0 a2=fc6 a3=0 items=0 ppid=2583 pid=6120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:46.967000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:46.967000 audit[6120]: AVC avc: denied { watch } for pid=6120 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=3646734 scontext=system_u:system_r:container_t:s0:c579,c814 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:55:46.967000 audit[6120]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=7 a1=4000c80040 a2=fc6 a3=0 items=0 ppid=2583 pid=6120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c579,c814 key=(null) Jun 25 14:55:46.967000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:55:49.196936 kubelet[2883]: E0625 14:55:49.196899 2883 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.36:54210->10.200.20.10:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3815.2.4-a-39232a46a6.17dc471aeebb6b2a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3815.2.4-a-39232a46a6,UID:0ed2591aeadcd7f1b0d2fd3658588dc8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3815.2.4-a-39232a46a6,},FirstTimestamp:2024-06-25 14:55:38.748668714 +0000 UTC m=+200.931316949,LastTimestamp:2024-06-25 14:55:38.748668714 +0000 UTC m=+200.931316949,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.4-a-39232a46a6,}" Jun 25 14:55:50.533894 systemd[1]: cri-containerd-518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8.scope: Deactivated successfully. Jun 25 14:55:50.534178 systemd[1]: cri-containerd-518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8.scope: Consumed 1.815s CPU time. Jun 25 14:55:50.538000 audit: BPF prog-id=108 op=UNLOAD Jun 25 14:55:50.542430 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 14:55:50.542531 kernel: audit: type=1334 audit(1719327350.538:818): prog-id=108 op=UNLOAD Jun 25 14:55:50.538000 audit: BPF prog-id=115 op=UNLOAD Jun 25 14:55:50.554262 kernel: audit: type=1334 audit(1719327350.538:819): prog-id=115 op=UNLOAD Jun 25 14:55:50.563035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8-rootfs.mount: Deactivated successfully. 
Jun 25 14:55:50.564489 containerd[1520]: time="2024-06-25T14:55:50.564435204Z" level=info msg="shim disconnected" id=518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8 namespace=k8s.io Jun 25 14:55:50.564843 containerd[1520]: time="2024-06-25T14:55:50.564822331Z" level=warning msg="cleaning up after shim disconnected" id=518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8 namespace=k8s.io Jun 25 14:55:50.564936 containerd[1520]: time="2024-06-25T14:55:50.564921653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:55:50.662515 kubelet[2883]: I0625 14:55:50.662485 2883 scope.go:117] "RemoveContainer" containerID="518950c6c6ac2aa69f5d48c76752548e0aeeebf9296cb0d1916a4c1825b061d8" Jun 25 14:55:50.664809 containerd[1520]: time="2024-06-25T14:55:50.664761179Z" level=info msg="CreateContainer within sandbox \"07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 14:55:50.691210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36758100.mount: Deactivated successfully. Jun 25 14:55:50.710712 containerd[1520]: time="2024-06-25T14:55:50.710654330Z" level=info msg="CreateContainer within sandbox \"07ea1d44177469ff745369f8b84c0a64f18003e2025ba16e85605cb606982fe0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cf9828012260c0fbc99e81179b279edb456d4460356c1be377fce24f42ffe2b7\"" Jun 25 14:55:50.711287 containerd[1520]: time="2024-06-25T14:55:50.711262661Z" level=info msg="StartContainer for \"cf9828012260c0fbc99e81179b279edb456d4460356c1be377fce24f42ffe2b7\"" Jun 25 14:55:50.739438 systemd[1]: Started cri-containerd-cf9828012260c0fbc99e81179b279edb456d4460356c1be377fce24f42ffe2b7.scope - libcontainer container cf9828012260c0fbc99e81179b279edb456d4460356c1be377fce24f42ffe2b7. 
Jun 25 14:55:50.749000 audit: BPF prog-id=227 op=LOAD Jun 25 14:55:50.755000 audit: BPF prog-id=228 op=LOAD Jun 25 14:55:50.761077 kernel: audit: type=1334 audit(1719327350.749:820): prog-id=227 op=LOAD Jun 25 14:55:50.761181 kernel: audit: type=1334 audit(1719327350.755:821): prog-id=228 op=LOAD Jun 25 14:55:50.755000 audit[6243]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2576 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:50.783767 kernel: audit: type=1300 audit(1719327350.755:821): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2576 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:50.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393832383031323236306330666263393965383131373962323739 Jun 25 14:55:50.806476 kernel: audit: type=1327 audit(1719327350.755:821): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393832383031323236306330666263393965383131373962323739 Jun 25 14:55:50.755000 audit: BPF prog-id=229 op=LOAD Jun 25 14:55:50.836269 kernel: audit: type=1334 audit(1719327350.755:822): prog-id=229 op=LOAD Jun 25 14:55:50.836368 kernel: audit: type=1300 audit(1719327350.755:822): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2576 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:50.755000 audit[6243]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2576 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:50.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393832383031323236306330666263393965383131373962323739 Jun 25 14:55:50.858641 kernel: audit: type=1327 audit(1719327350.755:822): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393832383031323236306330666263393965383131373962323739 Jun 25 14:55:50.757000 audit: BPF prog-id=229 op=UNLOAD Jun 25 14:55:50.865121 kernel: audit: type=1334 audit(1719327350.757:823): prog-id=229 op=UNLOAD Jun 25 14:55:50.757000 audit: BPF prog-id=228 op=UNLOAD Jun 25 14:55:50.757000 audit: BPF prog-id=230 op=LOAD Jun 25 14:55:50.757000 audit[6243]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2576 pid=6243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:50.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393832383031323236306330666263393965383131373962323739 Jun 25 14:55:50.880387 containerd[1520]: time="2024-06-25T14:55:50.880329279Z" level=info msg="StartContainer for \"cf9828012260c0fbc99e81179b279edb456d4460356c1be377fce24f42ffe2b7\" returns successfully" Jun 25 14:55:55.083333 kubelet[2883]: I0625 14:55:55.083296 2883 status_manager.go:853] "Failed to get status for pod" podUID="0beab0512939231dbe35da11b8acbbcb" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-39232a46a6" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.36:54346->10.200.20.10:2379: read: connection timed out" Jun 25 14:55:55.424728 kubelet[2883]: E0625 14:55:55.424374 2883 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-39232a46a6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 14:55:57.261812 systemd[1]: cri-containerd-9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b.scope: Deactivated successfully. Jun 25 14:55:57.260000 audit: BPF prog-id=223 op=UNLOAD Jun 25 14:55:57.266160 kernel: kauditd_printk_skb: 4 callbacks suppressed Jun 25 14:55:57.266275 kernel: audit: type=1334 audit(1719327357.260:826): prog-id=223 op=UNLOAD Jun 25 14:55:57.273000 audit: BPF prog-id=226 op=UNLOAD Jun 25 14:55:57.281265 kernel: audit: type=1334 audit(1719327357.273:827): prog-id=226 op=UNLOAD Jun 25 14:55:57.292784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b-rootfs.mount: Deactivated successfully. 
Jun 25 14:55:57.345761 containerd[1520]: time="2024-06-25T14:55:57.345704256Z" level=info msg="shim disconnected" id=9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b namespace=k8s.io Jun 25 14:55:57.346201 containerd[1520]: time="2024-06-25T14:55:57.346178104Z" level=warning msg="cleaning up after shim disconnected" id=9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b namespace=k8s.io Jun 25 14:55:57.346328 containerd[1520]: time="2024-06-25T14:55:57.346311626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:55:57.678352 kubelet[2883]: I0625 14:55:57.678223 2883 scope.go:117] "RemoveContainer" containerID="4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24" Jun 25 14:55:57.678697 kubelet[2883]: I0625 14:55:57.678569 2883 scope.go:117] "RemoveContainer" containerID="9af8f7d7d0071463dd8aedf1de45141e705d38fbceed0489ae1b55d8a6abde3b" Jun 25 14:55:57.678845 kubelet[2883]: E0625 14:55:57.678797 2883 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76c4974c85-w9m2r_tigera-operator(4afe6c72-cdf5-4281-959e-875606dd6572)\"" pod="tigera-operator/tigera-operator-76c4974c85-w9m2r" podUID="4afe6c72-cdf5-4281-959e-875606dd6572" Jun 25 14:55:57.680281 containerd[1520]: time="2024-06-25T14:55:57.680221733Z" level=info msg="RemoveContainer for \"4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24\"" Jun 25 14:55:57.691905 containerd[1520]: time="2024-06-25T14:55:57.691854055Z" level=info msg="RemoveContainer for \"4bcd5e40d6930167db45f29f3644fd20f0874b5f911148298118dc8de9051a24\" returns successfully" Jun 25 14:55:58.774562 systemd[1]: run-containerd-runc-k8s.io-caeddda2893e4271d0c974791b33890d9eee866109d6a0d3736e745b12339144-runc.Whyooy.mount: Deactivated successfully. 
Jun 25 14:56:05.343259 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.358585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.375919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.391433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.407145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.422501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.431082 kubelet[2883]: E0625 14:56:05.431050 2883 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-3815.2.4-a-39232a46a6)" Jun 25 14:56:05.438620 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.438939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.447308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.455361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.463914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.472721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.481656 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.489832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.490081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.506554 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.506810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.522203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.522451 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.546239 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.546495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.554367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.562513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.570405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.578499 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.586588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.595363 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.603572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.611941 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.628263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.628538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.628660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.644105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.644462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.660709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.668808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.676595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.676861 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.692530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.692839 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.703062 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.718981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.719337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.734781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.735108 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.750730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.759207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.759597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.768471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.783950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.784283 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.791791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.807535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.807834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.823189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.831376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.831669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.847145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.847440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.862832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.863151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.878864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.879261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.895142 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.895507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.911525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.911879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.927287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.927611 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.944052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.944419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.959588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.959910 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.976204 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.976532 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.991799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:05.992171 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.007418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.007708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.023674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.023964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.039287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.047698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.048015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.063510 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.063831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.080247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.080656 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.096848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.097135 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.112687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.121165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.121466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.137288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.137570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.158775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.159133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.175033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.175397 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.183280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.199270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.199562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.215327 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.215645 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.231447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.231809 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.247578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.247879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.263660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.263964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.279341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.279660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.295281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.295590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.310907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.311210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.319145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.335300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.335667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.351665 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.352005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.367508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.367832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.383158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.383455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.399418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.407976 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.408269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.424905 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#67 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jun 25 14:56:06.425253 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001